MRT++ – Design Issues and Brief Reference

Dieter W. Fellner, Stephan Schäfer
Department of Computer Science, University of Bonn
{fellner,schaefer}@graphics.cs.uni-bonn.de

23rd July 1998

Abstract

Currently, the two major obstacles keeping 3D application development from becoming a mainstream technology for everyday use are a) the computational and rendering requirements of 3D and b) the lack of a programming model that is appropriate for widespread use by developers who are not experts in the field of 3D graphics. As the hardware gets faster, software will become the critical factor in the further growth of 3D application development.

In this paper we present a software architecture for a 3D rendering package which currently operates in several modes: it either ray-traces the scene, creating photorealistic images, or it transforms the scene into a set of planar approximations, generating the graphics primitives typically required for radiosity computations as well as supported by standard graphics packages. The renderer is object-based rather than drawing-based and consists of an extensible set of objects that perform a variety of operations. The 3D objects as well as the imaging objects (like Image, Screen, Light) are the building blocks that lend themselves to programmer customization through techniques such as subclassing. State-of-the-art functionality and advanced algorithms can be incorporated into this renderer with a minimum amount of programming (i.e. analysis/understanding of existing code and creation of new code).

A thorough test of this approach has been carried out by using the renderer as the platform for research tasks as well as for teaching and for lab assignments in several undergraduate and graduate courses at two different universities. Experience with this (inhomogeneous) user population shows that the system meets its design goal of being highly customizable and extendable.

Categories and Subject Descriptions: I.3.3 [Computer Graphics]: Picture/Image Generation — display algorithms; I.3.4 [Computer Graphics]: Graphics Utilities — graphics packages; software support; I.3.6 [Computer Graphics]: Methodology and Techniques — device independence; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism — color, shading, shadowing, and texture.

General Terms: Design

Additional Keywords and Phrases: Extensibility, Customization


1 Introduction

3D graphics has not yet become a mainstream technology for everyday application and user interface development. One reason for this limited growth is that the computational and rendering requirements of 3D are beyond the performance capabilities of most machines. The other major reason for the slow proliferation of 3D is that the software libraries available today do not provide a programming model that is appropriate for widespread use by developers not familiar with 3D graphics programming.

The latest generation of RISC-based workstations and personal computers is quite capable of meeting the 3D performance challenge. As the hardware gets faster, software will become the critical factor in the further growth of 3D application development. Currently available software libraries are of two types. Hardware drawing libraries, such as SGI's GL, HP's Starbase, and SUN's XGL, provide pixel and graphics primitive drawing commands as a software layer above hardware devices (typically a frame buffer). Structured drawing libraries, such as GKS [ISO85], PHIGS+ [ISO92], HOOPS [WC91], and Doré [Kub91], provide structured drawing commands that are abstracted from the low-level hardware interface. All of these systems, however, are variants of display list technology. Designed to simplify the task of building applications using synthetic 3D graphics, these models imply that the same program organization and methods of user interaction are suitable for all graphics applications. While these techniques have served the graphics community well, they are being stretched by the size and complexity of today's applications. In addition, they appear to be inadequate for dealing with new issues such as multimedia, time-critical computing, and asynchronous user input.

The next generation of 3D software toolkits will be object-based rather than drawing-based [Wiß90]. They will be composed of extensible sets of editable 3D objects that perform a variety of operations. Rendering will be one of the many operations that each object implements. The 3D objects will be building blocks that lend themselves to programmer customization through techniques such as subclassing.

This paper discusses a software architecture for a 3D rendering package developed independently of Inventor [SC92]. It can operate in several modes: it either creates photorealistic images by ray-tracing the scene, or it creates polygonal approximations of the scene and utilizes standard graphics packages like PHIGS+ or CGI-3D [FFW93] (which in turn makes use of XGL [Sun93] or OpenGL [Sil93] whenever available) for the output to the display device. The polygonal approximation step also serves as a starting point for radiosity calculations and can itself be modified to implement radiosity-related meshing techniques.

The primary goal was the design of an object-based (in contrast to drawing-based) renderer consisting of a well structured and extensible set of objects that support all operations necessary to build a full-fledged rendering system for evaluation purposes during undergraduate and graduate teaching. As the name Minimal Rendering Tool indicates, we tried to keep the renderer as minimal as possible. Nevertheless, experience with the system shows that state-of-the-art functionality as well as advanced algorithms can be (and have been) incorporated into this renderer with a minimum amount of programming.
The last section gives a brief summary of our experiences with a quite inhomogeneous user population performing tasks which are typical for software engineers building, maintaining, or enhancing rendering or visualization packages.

2 Object-Oriented Programming

With its roots in the late 1960s (a language called Simula 67), the object-oriented paradigm offers a new approach to the process of software development. The fundamental concept of the object-oriented paradigm, or the major difference between object-oriented and most conventional (imperative) programming languages, is that problem solutions are implemented by sending messages to objects. According to the proponents of object-oriented languages, most conventional languages are concerned with data: data structures are defined and sent to procedures as parameters. This is in contrast to object-oriented programming, where the objects in a solution have to be defined along with the messages to which each object will respond.



To evaluate what object-oriented programming can and can’t do, we need to recall the five key components of the object-oriented paradigm. These are: 

object: an encapsulated abstraction that includes state information (data) and a well-defined set of messages to which it responds.



message: a special keyword, symbol, or identifier (with or without parameters) that specifies the action to be taken by an object.



class: is defined by a class description that identifies both the messages (the accessing protocol) for an object of that class and the state variables. Classes may be organized in hierarchies with subclasses (or derived classes) inheriting properties from their parent class (or base class).



instance: objects are instances of a class. The properties of any instance are defined by the class description of its class.



method: defines how a message to an object is to be implemented.

The properties each object-oriented language must support are: 

abstraction and encapsulation: objects are quite frequently spoken of as encapsulations of abstractions as each object within itself contains both the knowledge of how to respond to a set of messages and the values for its internal state (as defined by the class description for an object of its class). In addition, the internals of an object (methods and state variables) are invisible to and unalterable by other objects.



inheritance: A subclass inherits properties from all its base classes and typically adds additional features of its own. Inheritance, together with abstraction and encapsulation, is the key to software reuse as it allows the reuse of data structures and object behavior whenever a new class is defined.



polymorphism: is characterized best in terms of its properties: Overloading of function names enables the programmer to use the same function name as a message to objects of different classes, each object responding accordingly. Operator overloading is motivated by the demand for consistency of usage. Just think of overloading the arithmetic operators for a class representing 3D vectors or colors defined by their red, green, and blue components. Late binding and virtual functions are essential for the proper handling of messages in object hierarchies (the same message being sent to objects that may be instances of a class within a hierarchy and respond in a different way). Generics are essential whenever a class has to represent a data structure (e.g. a list or queue) that accepts any element type.

3 Basic Concepts of Ray-Tracing

Since its development by Turner Whitted [Whi80] ray-tracing has become a very popular technique to synthesize ‘photorealistic’ images [Gla89]. The key concept is illustrated in Figure 1: Rays are cast from the observer through the center of each pixel onto the scene. If the ray doesn’t hit any object, the pixel is colored with the scene’s background color. Otherwise, the pixel color is evaluated from the color of the light emitted at the intersection point closest to the observer, say point P, in the direction of the ray. To achieve a photorealistic impression the color at point P has to be computed by evaluating the light coming from other objects or light sources and reflected or refracted by that object. This computation can be done by recursively tracing the reflected and transmitted rays. The resulting color of a point is the sum of the color computed locally and the colors contributed by the reflected and the transmitted ray. The equation reads:



Figure 1: Ray-tracing: reflected rays and transmitted rays contribute to the intensity at point P

I = I_L + c_ref · I_R + c_trans · I_T

with

    I_L      ... intensity computed locally
    I_R      ... intensity contributed by the reflected ray
    I_T      ... intensity contributed by the transmitted ray
    c_ref    ... constant to control reflection, c_ref ∈ [0, 1] (0 = not reflecting, 1 = perfect mirror)
    c_trans  ... constant to control transmission, c_trans ∈ [0, 1] (0 = opaque, 1 = 100% transparent)
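The recursion behind this equation can be summarized in a few lines. The following is a self-contained sketch using hypothetical placeholder types and helpers (Color, Ray, Hit, Scene, shadeLocal(), reflectedRay(), transmittedRay()); it is not MRT code, it merely mirrors the equation above.

// Hypothetical placeholder types for illustration only; MRT's actual classes differ.
struct Color { double r, g, b; };
Color operator+(Color a, Color b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
Color operator*(double s, Color c) { return {s * c.r, s * c.g, s * c.b}; }

struct Vec   { double x, y, z; };
struct Ray   { Vec origin, dir; };
struct Hit   { Vec point, normal; double cRef, cTrans; };
struct Scene;

// Assumed helpers (declarations only): local shading, secondary rays, intersection, background.
Color shadeLocal(const Scene& s, const Hit& h);        // I_L, including illumination (shadow) rays
Ray   reflectedRay(const Ray& r, const Hit& h);
Ray   transmittedRay(const Ray& r, const Hit& h);
bool  intersect(const Scene& s, const Ray& r, Hit& h);
Color background(const Scene& s);

Color trace(const Scene& scene, const Ray& ray, int depth, int maxDepth)
{
    Hit hit;
    if (depth > maxDepth || !intersect(scene, ray, hit))
        return background(scene);

    Color I = shadeLocal(scene, hit);                                                         // I_L
    if (hit.cRef > 0.0)
        I = I + hit.cRef * trace(scene, reflectedRay(ray, hit), depth + 1, maxDepth);         // c_ref * I_R
    if (hit.cTrans > 0.0)
        I = I + hit.cTrans * trace(scene, transmittedRay(ray, hit), depth + 1, maxDepth);     // c_trans * I_T
    return I;
}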

To take care of shadows, so-called illumination rays have to be sent from each intersection point to all light sources. If an illumination ray hits another object on its way to the light source, the intersection point lies in that object's shadow and thus cannot receive any light from that light source. Figure 2 illustrates the parameters used to define the modified pinhole camera model as commonly used in computer graphics, with the parameters observer (eyepoint), view up vector (VUV), horizontal and vertical field of view (hfov, vfov), center of interest (lookpoint), and the screen dimensions screen_x and screen_y. The world that finally appears on the screen lies within the infinite pyramid defined by the observer and the view window, with the top cut off at the view window.



Figure 2: The modified pinhole camera model (observer, lookpoint, view up vector VUV, view window of size screen_x × screen_y, fields of view hfov/vfov, first ray)

4 Architecture of MRT

The Minimal Rendering Tool (MRT) was not designed to compete with commercial rendering packages but to serve as a testbed for the implementation of new algorithms (or, in the case of student assignments, the implementation of existing ones) and as a tool for teaching [Fel92]. Currently MRT operates in one of two modes: it either creates photorealistic images by ray-tracing the scene,¹ or it creates polygonal approximations of the scene. These polygons can then be used as starting patches for a radiosity computation or be rendered directly by CGI3D [FFW93], an abstract 3D device interface based on the concept of CGI [ISO91]. CGI3D utilizes existing graphics hardware through calls to OpenGL [Sil93] or XGL [Sun93] library functions on Unix systems and through calls to OpenGL or Direct3D on MS-Windows systems, provided these packages are supported on the actual system. By introducing CGI3D the programmer does not need to worry about the functionality of or the interface to the underlying graphics packages (mapping of geometric objects to supported primitives, parameters of the camera model, definition of light sources, ...). CGI3D also serves as a high-level object-oriented interface to these packages. Without dedicated hardware the whole rendering process is done in software.²

The programming language C++ [ES90] has been chosen due to its availability on almost all hardware and software platforms. Thus, a minimal familiarity with the language constructs of C/C++ will be assumed in the following. For the discussion of the building blocks the various classes of the renderer are grouped according to their functionality.

¹ MRT could as well stand for Minimal Ray Tracer.
² This also applies to shading techniques like Phong shading, which is typically not supported by third-party rendering libraries.

4.1 Mathematical Elements

The basic data type representing all non-integral values is called t_Real.³ It has been introduced to encapsulate machine-specific details (e.g. machine accuracy for type double, handling of roundoff errors, ...) and to provide the flexibility to change the underlying representation of non-integral values in a convenient way. Introducing a new type of arithmetic (e.g. using rational numbers instead of floating-point arithmetic or using methods described in [SGV94]) only requires the redefinition of this type together with the overloading of all standard mathematical operators and functions.

Class t_3DVector implements a 3D vector or 3D point and class t_4x4Matrix implements a 4 × 4 transformation matrix. These classes, as well as the following ones, use the concept of operator overloading to enable the programmer to formulate algorithms in a natural way. For example, (P+Q)/2 gives the point half way between P and Q, and n = u*v assigns to n the vector normal to the plane defined by vectors u and v, such that u, v, and n form a right-handed system.

³ The prefix t_ has been introduced to avoid name conflicts with classes defined in other packages (e.g. class Color in InterViews).
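The flavour of this operator overloading can be illustrated with a small, self-contained stand-in (a sketch, not MRT's actual t_3DVector):

// Simplified stand-in for a 3D vector class with overloaded operators.
struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator/(double s)      const { return {x / s, y / s, z / s}; }
    Vec3 operator*(const Vec3& b) const {                 // cross product: u*v is normal to u and v
        return {y * b.z - z * b.y, z * b.x - x * b.z, x * b.y - y * b.x};
    }
};
// Usage as in the text:
//   Vec3 M = (P + Q) / 2;   // point half way between P and Q
//   Vec3 n = u * v;         // normal to the plane of u and v (right-handed system)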

4.2 Color

Class t_Color implements the representation of a color together with a set of overloaded operators. This way rendering in more than three frequency intervals (by default red, green, and blue) or a different treatment of ‘overexposed’ pixels can be achieved by modifying this class only.
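To illustrate the kind of change this isolates, a simplified color class with one possible treatment of overexposed values might look like this (a sketch, not MRT's t_Color):

// Simplified stand-in for a color class: three channels plus a clamp for 'overexposed' values.
// Changing the number of channels or the clamping policy stays local to this one class.
struct ColorRGB {
    double c[3];                              // by default: red, green, blue
    ColorRGB operator+(const ColorRGB& o) const {
        return {{c[0] + o.c[0], c[1] + o.c[1], c[2] + o.c[2]}};
    }
    ColorRGB clamped() const {                // clamp 'overexposed' channels to [0, 1]
        ColorRGB r = *this;
        for (double& v : r.c) { if (v > 1.0) v = 1.0; if (v < 0.0) v = 0.0; }
        return r;
    }
};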

4.3 Objects

Class t_Object is the base class for all types of objects that can appear in a 3D scene. There are several categories of objects derived from this class: Geometric objects (t_SurfaceObject) are primitive rendering objects that have a 2D surface, like spheres or Bézier patches. Volumetric objects (t_VolumeObject) are primitive rendering objects that simulate the effect of participating media in a scene, like smoke or atmosphere. Container objects (t_Scene) structure sets of primitive objects spatially or logically into sub-scenes. Reference objects (t_RefObject) are wrappers around other objects that allow for memory-efficient handling of objects which only differ marginally, e.g. by position or size. All these classes are described in more detail in the next sections. Their common base class t_Object provides the following virtual methods (a condensed interface sketch follows below):

t_BVol boundingVolume() returns the bounding volume (whose implementation is encapsulated in class t_BVol).

bool intersect() computes the intersection between the object and a ray (see Section 4.4). It returns true iff an intersection can be found at a positive distance less than a given value (which stores the distance to the closest object found so far).

bool checkIntersect() just performs a yes/no intersection test, which is sufficient for the treatment of illumination rays and which, in most cases, can be done significantly faster than computing the intersection point. However, the default implementation is based on method intersect(). Another version, called bool checkBoxIntersect(), is provided for the initialization of spatial data structures to accelerate the ray-object intersections. It determines if the object has a non-empty intersection with an axis-aligned box. The default implementation compares the box with the object's bounding volume.

t_3DVector surfaceNormal() computes the normal to the surface at a given point (pointing outwards). Note: for most objects the given point has to be on the object's surface; otherwise the result might be undefined.

bool approxShape() takes a quality control parameter and creates a faceted approximation of the object, stored in a winged edge data structure similar to [Bau75] to maintain the facets' adjacency information. The facets (boundary representation [BFH95]) can be retrieved with t_BRepPtr boundary(). With the number of generated facets depending on the quality control, methods approxShape() and boundary() are the key elements for visualizing an object (and thus the whole scene) using a lower level 3D drawing package, as well as serving as a starting point for finite element algorithms like the radiosity method [CW93]. The return value of approxShape() indicates if the approximation has been changed as a result of the new quality value. This information is essential for the efficient control of screen refreshes.

t_Real inside() returns an inside/outside measure in [0, 1] for a given point with respect to a solid object: inside > 0.5 ⇔ the point is inside the object (with 1.0 returned at the object's center), inside = 0.5 ⇔ the point is on the object's surface, inside < 0.5 ⇔ the point is outside the object (with 0.0 returned at infinite distance). This message allows objects to be used in CSG (constructive solid geometry) expressions.

bool mapInvers() maps a point on the surface of that object onto a two-dimensional texture space defined over [0, 1] × [0, 1]. By default, this message returns false, indicating that 2D texture mapping is not supported for this specific object.

t_ShaderPtr shader() gives access to the object's shader, which encapsulates material properties as well as shading methods. See Section 4.5.

4.3.1 Geometric Objects

The base class for geometric objects (i.e. solids) is class t_SurfaceObject, which is derived from the classes t_Object and t_Approximation. Class t_Approximation is responsible for maintaining the object's boundary representation. Method adjustVertex() takes a given vertex and moves its position to the closest location on the object's surface. This is useful to enhance the shape of a geometric object after refining a face, which is the key to efficiently rendering curved objects with the hierarchical radiosity algorithm [Sch97]. MRT currently supports some 20 geometric objects (see the Appendix for a detailed list). Additionally, CSG expressions with arbitrary objects can be built. The resulting CSG objects support ray tracing as well as a polygonal representation, i.e. a BRep.

4.3.2 Volumetric Objects

Volumetric objects are needed to render atmospheric effects or to display medical data. Class t_VolumeObject contains a pointer to volume data and an optional cut region. The cut region is represented by a geometric object (see Section 4.3.1) which defines a region of the volume data to display. Classes t_LevoyVolume and t_BlinnVolume are derived from t_VolumeObject and implement the shading models of Levoy (contour surfaces) and Blinn (atmospheric effects). Class t_VolumeData contains a bounding volume of the data set and provides a way to traverse it. The traverser samples the volume data in regular steps (methods init() and step()) and returns the data value (currData()) and the gradient (currGradient()) at the current position. Derived classes implement continuous and discrete volume datasets. Examples of continuous datasets are noise or turbulence functions. Sampling a discrete dataset is implemented by class t_VDRegularGrid, which can be initialized with volume data stored on hard disk.
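Used along a ray, the traversal interface amounts to a loop like the following sketch; the parameter lists of init() and step() as well as the shading helper are assumptions for illustration, not MRT's exact API:

// Sketch of sampling volume data with the traverser described above.
t_Color integrateAlongRay(t_VolumeData& data, const t_Ray& ray, t_Real stepsize)
{
    t_Color accumulated(0, 0, 0);
    if (!data.init(ray, stepsize))                   // set up regular sampling along the ray (signature assumed)
        return accumulated;
    do {
        t_Real     value    = data.currData();       // scalar data value at the current position
        t_3DVector gradient = data.currGradient();   // local gradient (e.g. for contour surfaces)
        accumulated = accumulated + shadeSample(value, gradient);   // hypothetical shading helper
    } while (data.step());                           // advance to the next sample position
    return accumulated;
}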

4.3.3 Object Containers

It is worth mentioning that the whole scene as well as all sub-scenes are handled consistently as elementary scene objects. That is, class t_Scene is also derived from class t_Object and thus also provides the above messages. This illustrates the power of late binding and is an elegant way of handling arbitrarily complex hierarchies of scenes: to compute the intersection of a ray and a particular scene, the scene first tests for an intersection with its own bounding volume (computed after loading the scene description by message boundingVolume()) and then sends the message intersect() to all objects it contains, which can either be elementary objects or scenes. The same mechanism applies to messages boundingVolume(), checkIntersect(), and approxShape(). Different schemes to accelerate the computation of intersections between a ray and a scene may be realized by deriving new classes from class t_Scene and overriding its intersect() messages.
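In code, the bounding-volume early-out amounts to something like the following sketch; the member names (bvol, children) are assumptions for illustration, not MRT's actual fields:

// Sketch of the hierarchical intersection test described above.
bool t_Scene::intersect(t_Ray& ray)
{
    if (!bvol.checkIntersect(ray))        // cheap yes/no test against the sub-scene's bounding volume
        return false;                     // prune the whole subtree

    bool hit = false;
    for (t_Object* child : children)      // children are elementary objects or nested scenes
        hit |= child->intersect(ray);     // late binding: each object answers in its own way
    return hit;
}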

4.3.4 Reference Objects

Objects may also appear as reference objects, which are based on t_Object and only contain two transformation matrices, a surface, and a reference pointer to a t_Object. The creation of an instance of an object just increments a counter variable inside the reference pointer of the corresponding reference object and returns the object pointer. All messages sent to the t_Object are forwarded to the referenced object, with transformation of the embedded vector data. This leads to a reduction of memory usage and is especially useful in the context of animation: display lists of polygonalized objects have to be compiled only once for objects of the same type and need only be associated with the corresponding matrix transformation, thus reducing a bottleneck in rendering pipelines.

4.4 Rays

Most rendering algorithms require intersection tests between a ray and one or more objects. Not only ray tracing but also form factor calculations for radiosity, collision detection, shadow calculation, and object picking can make use of a ray casting module. All parameters needed to launch a ray into a specific direction and to check if and where an intersection occurred are encapsulated in two classes. A ray (class t_Ray) is defined by its origin and its direction. To avoid intersections with the object that launched the ray, a pointer to this object can be supplied. This is useful to avoid numerical errors when computing rays that bounce off a surface. An optional search distance bounds the result to a fixed interval. A call to the intersect() method of a scene object (see Section 4.3) may update the ray with an intersection object (class t_SurfaceIntersection), which contains the hit point and the hit object. However, this update only takes place if the intersection occurs at a distance closer than the distance stored in the ray. If a volume object was hit, a t_VolumeIntersection is stored in the ray. Instead of a single hit point, volume intersections contain an entry point and an exit point to describe the ray's intersection with the volume.
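A typical query built on these two classes could look like the following sketch; the constructor arguments and the accessor for the stored intersection are assumptions for illustration:

// Sketch of a ray query as described above.
bool pickClosest(t_Scene& scene, const t_3DVector& origin, const t_3DVector& direction,
                 t_SurfaceIntersection& hit)
{
    t_Ray ray(origin, direction);             // optionally: launching object and maximum search distance
    if (!scene.intersect(ray))                // updates the ray only if a closer intersection is found
        return false;
    hit = ray.surfaceIntersection();          // assumed accessor for the stored hit point / hit object
    return true;
}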

4.5 Shader

Class t_Shader is the abstract base class for all illumination models. It gives access to the surface parameters needed to implement the Phong illumination model as well as to the functions diffuseCoefficient(), specularTransCoefficient(), incidentLight(), and attenuateLight(). Virtual message diffuseCoefficient() returns the diffuse reflection coefficient of the surface at a given point; it is overloaded by surfaces providing texture mapping. The specular transmission coefficient (specularTransCoefficient()) is overloaded by surfaces providing transparent texture maps. Virtual function incidentLight() returns the color of the surface at a given point computed globally, i.e., considering the influence of the surrounding scene. This method has to be overloaded to implement a new illumination model. Virtual function attenuateLight() is used to attenuate the intensities of transmitted rays. Thus, derived transparent surfaces can attenuate (filter) rays that have intersected the corresponding object during the shadow check. Similar to class t_Object, two classes are derived from t_Shader to seamlessly model the interaction of both solids and volumes with light. These are described in the next sections.

4.5.1 Surface Shader

To each surface object a surface shader (class t_SurfaceShader) has to be attached. To allow for easy extensibility, the reflection model and the illumination model provided by the shader are encapsulated in two separate classes, t_SrfIlluminator and t_Reflector. The method incidentLight() of class t_SrfIlluminator implements the Phong illumination model

    I = k_a I_a                                  ... ambient light
      + k_d Σ_{L_q} I_{L_q} (N · L_{L_q})        ... diffuse reflection
      + k_s Σ_{L_q} I_{L_q} (V · R_{L_q})^c      ... specular reflection
      + k_t I_t                                  ... transmitted light

with the local reflection vector R and the local vector V to the observer. To support the computation of global illumination according to the Monte Carlo technique, class t_Reflector provides methods to evaluate the BRDF and BTDF of surfaces. Reflected or transmitted rays distributed according to a surface's BRDF can be generated by calls to method importanceSample().
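For illustration, a drastically simplified sketch of how such an incidentLight() evaluates the sum over the light sources follows; the helper names, the accessors on the shader, and the exact parameter list of directLight() are assumptions and do not reproduce MRT's actual interfaces:

#include <cmath>
#include <vector>

// Simplified Phong evaluation; ka(), kd(), ks(), specExp(), ambientIntensity() and
// reflectAbout() are assumed helpers/accessors for illustration only.
t_Color phongIncidentLight(const t_3DVector& P, const t_3DVector& N, const t_3DVector& V,
                           const t_SrfIlluminator& mat, const std::vector<t_Light*>& lights)
{
    t_Color I = mat.ka() * ambientIntensity();                          // k_a * I_a
    for (t_Light* light : lights) {
        t_3DVector L;
        t_Real dist, NdotL, brightness;
        if (light->directLight(P, L, dist, NdotL, brightness) == SHADOW)
            continue;                                                   // no contribution from this light
        t_3DVector R = reflectAbout(L, N);                              // local reflection vector
        I = I + mat.kd() * brightness * NdotL                           // diffuse term
              + mat.ks() * brightness * std::pow(dot(V, R), mat.specExp()); // specular term
    }
    return I;                                                           // the transmitted term k_t * I_t is added by the caller
}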

Additionally, class t_SurfaceShader provides the following methods: attenuateLight() attenuates the intensity of shadow rays linearly by the shader's transparency. Virtual message perturbNormal() is provided as a hook for derived surface shaders to change the surface normal for a given point and object in order to implement bump mapping. Class t_Bump provides this feature by accessing a user-supplied file and interpreting the stored pixel values as perturbation vectors. Other derived surface shaders implement 2D texture mapping (reflective and transparent) as well as 3D or solid texture mapping. 2D texture mapping can access indexed-color or true-color bitmaps. In the case of transparent textures an additional bitmap containing alpha values is supplied and modulates the specular transmission coefficient read from the texture file. Shaders for solid textures are provided to model materials like marble or wood.

4.5.2 Volume Shader

Volume objects do not contain a shader but are themselves derived from class t_Shader. This is useful because shading a volume has to take the complete volume object into account. Thus classes t_BlinnVolume and t_LevoyVolume come with their own derived methods incidentLight() and attenuateLight().

4.6 Light Sources

Class t_Light implements a positional light source and serves as the base class for all light sources. The key to defining new types of light sources, like spot lights or directional light sources, is the virtual function directLight(). Given an object, a point P on its surface, and the level of recursion, it computes the vector L to the light source, the distance to the light source, the scalar product of L and the normal N in P, and the brightness of the light source received at P. Message directLight() returns SHADOW iff point P is shadowed by any object in the scene. Area light sources of arbitrary geometry are supported by class t_LightObject. A light object contains the BRep of a geometric object (Section 4.3.1) and provides the necessary sampling methods. Class t_AttenuatedLight accounts for the attenuation due to transparent scene objects. All types of light sources (point light, spot light, directional light) also come in a distributed version, i.e. they simulate a penumbra by casting jittered sample rays from the light source.
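For example, a spot light could be derived roughly as follows; the exact parameter list of directLight() and the vector helpers are simplified assumptions, not MRT's actual signatures:

// Sketch of a derived spot light; members and the parameter list below are
// assumptions for illustration.
class t_MySpotLight : public t_Light {
public:
    virtual int directLight(const t_3DVector& P, t_3DVector& L,
                            t_Real& dist, t_Real& NdotL, t_Real& brightness)
    {
        int result = t_Light::directLight(P, L, dist, NdotL, brightness); // positional light incl. shadow test
        if (result == SHADOW)
            return SHADOW;
        t_3DVector toPoint = L * -1.0;              // direction from the light towards P
        if (dot(toPoint, axis) < cosAperture)       // P lies outside the cone
            brightness = 0;
        return result;
    }
private:
    t_3DVector axis;                                // spot direction
    t_Real     cosAperture;                         // cosine of the cone's half angle
};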

4.7 Camera Model

Class t_Camera implements the modified pinhole camera model as shown in Figure 2. After being initialized with the viewing parameters, message getRay() returns the initial ray from the observer through pixel (x, y). Different camera models can be implemented by deriving new screen classes which redefine message getRay(). The supported projection models are perspective and parallel, selected at the time of construction or with method projectionMode(). The corresponding projection matrix can be calculated with method calculateProjectionMatrix(). It can also be retrieved from the class for direct use by the application program. In addition to message getRay(), the camera provides mapping of a 3D point in world coordinates to NDC (normalized device coordinates in a 2D projection plane [0, 1] × [0, 1]) and to NPC (normalized projection coordinates [0, 1] × [0, 1] × [0, 1]) with methods projectToNDC() and projectToNPC(), respectively. Message clipPoint() takes care of clipping a line segment at the boundaries of the view volume.

4.8 Rendering

Class t_Image is the entry point to all provided rendering algorithms. Method rayTrace() contains the main loop over all pixels for ray tracing based algorithms. For each pixel to be computed one or more sample rays are launched into the scene. Protected messages gammaCorrect() and formatOutput() take care of gamma correction and formatting of output, respectively. Adaptive or stochastic anti-aliasing can be performed by derived classes (e.g. class t_AAAImage for Adaptive Anti-Aliasing) redefining virtual message rayTrace().
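Conceptually, the pixel loop and the anti-aliasing hook look like this (a condensed sketch with assumed member names; the real rayTrace() additionally handles sampling, gamma correction, and output formatting):

// Condensed sketch of the pixel loop described above; width, height and setPixel()
// are assumed members, not necessarily MRT's actual ones.
void t_Image::rayTrace(t_Camera& camera, t_IllumScene& scene)
{
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            t_Ray ray = camera.getRay(x, y);        // primary ray through pixel (x, y)
            t_Color col = scene.incidentLight(ray); // recursive ray tracing (see Section 5.1)
            setPixel(x, y, col);                    // gammaCorrect()/formatOutput() act on the result
        }
}
// A derived class such as t_AAAImage redefines rayTrace() to launch several
// (jittered) sample rays per pixel for adaptive anti-aliasing.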



Message approximate() takes the faceted approximation of the scene created by message approxShape() sent to all objects⁴ and displays it with the polygon renderer CGI3D. Currently supported shading options are wireframe, flat, Gouraud, and Phong, with one of the illumination models ambient, diffuse, Phong, monochrome, or depth-cueing. Additionally, CGI3D supports different rendering libraries like OpenGL, XGL, or Direct3D to exploit hardware-based rendering if available on the current platform.

Message radiosity() contains the main loop for finite element based radiosity algorithms. After generating a faceted approximation of the scene, the virtual radiosity methods of class t_IllumScene are called. These methods (radiosityInit(), radiosityStep(), and radiosityRender()) have to be provided if other radiosity algorithms are to be incorporated. Method writeInventor() can be called to save the radiosity solution in the Open Inventor 2.0 format.

⁴ In fact, message approxShape() is only sent to the object at the top of the hierarchy (see the discussion of message intersect() in Section 4.3.1).

Figure 3: Building blocks of the Minimal Rendering Tool (Image, IllumScene, Scene, Object, Camera, Surf.Shader, and Light with their key messages, on top of CGI3D/CGI++ using OpenGL, XGL, or software rendering)

4.9 Input

Extensibility of the input handling as well as support for different scene description formats is achieved by using the parser generator tool PCCTS. Appendix A lists the syntax of the scene description grammar. In addition to the generic MSD syntax, a parser for VRML 1.0 is provided as well. For building the internal data structures and to implement the local scoping rules, a set of classes is provided to stack surfaces, objects, groups of objects transformed by a transformation matrix, and include files. Sub-scenes are automatically created whenever an included scene description is processed and whenever a set of objects is enclosed by a BEGIN ... END construct. Bounding volumes of sub-scenes are computed automatically using the message unify() of class t_BVol, which combines two bounding volumes into one.


5 MRT at Work

MRT can operate in different modes, depending on the chosen rendering algorithm: it either creates static images by ray-tracing the scene, or it creates polygonal approximations of the scene and uses CGI3D for the output to the display device. Between the creation of the polygonal approximation and the final rendering, radiosity algorithms can compute a global illumination solution and provide vertex colors for each polygon, which can then be Gouraud-shaded by CGI3D. In the appendix a source code example is given to illustrate the steps required to obtain a rendered image.

5.1 Ray Tracing

Without going into the details of the ray-tracing algorithm's internals, the main steps during the ray-tracing of an image are the following:

1. Processing of the scene description and construction (i.e. initialization) of all objects and associated data structures (e.g. a hierarchy of bounding volumes, an octree, or a regular space subdivision).

2. Message rayTrace() of object image (an instance of class t_Image) loops over all pixels, sending message getRay() to object camera to get an initial ray from the observer through the pixel. That ray is passed to function incidentLight() of class t_IllumScene (an illuminated scene), which computes the color of the scene seen along the direction of the ray. incidentLight() performs this computation by first computing the closest object intersecting the ray (i.e. sending message intersect() to object scene). For reasons of efficiency, class t_Scene implements method intersect() by first sending message checkIntersect() to the bounding volume of the (sub)scene. If checkIntersect() doesn't report an intersection, message intersect() is not propagated further down the hierarchy. If the ray doesn't hit any object, incidentLight() returns the scene's background color (returned by message background()). Otherwise the color at the intersection point is computed by sending message shade() to the surface of that geometric object. shade() in turn evaluates the illumination equation for the ambient component and the diffuse and specular reflection components (by sending message surfaceNormal() to the geometric object and message lightEquPars() to all light sources). In the case of a reflecting or transmitting surface, the colors along the reflected and transmitted rays are computed (recursively) by sending message incidentLight() to the illuminated scene object.

In addition to method rayTrace(), class t_Image provides methods for writing the computed picture to the output file. Method firstOutput() writes the header information, and formatOutput() performs the gamma correction, if necessary, and writes the color information. Method gammaCorrect() addresses the problem that the light output produced by the screen phosphor is not linearly proportional to the pixel's given intensity value. Frequently, the relationship can be expressed by the exponential

    I = k · V^γ,    with k and γ constant,

where V is the intensity value set for each pixel. The correction of this non-linearity — the so-called Gamma Correction — can be achieved by solving the equation for V:

    V = (I / k)^(1/γ).

For color monitors the gamma correction has to be performed for all three primaries. Fortunately, most high-quality monitors provide automatic gamma correction.
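A per-channel gamma correction in the spirit of gammaCorrect() can be sketched in one line (k is usually normalized to 1; this is not MRT's actual implementation):

#include <cmath>

// Invert I = k * V^gamma per channel: V = (I / k)^(1/gamma).
double gammaCorrect(double intensity, double gamma, double k = 1.0)
{
    return std::pow(intensity / k, 1.0 / gamma);
}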


5.2 Approximative Rendering

To apply a standard rendering algorithm (wireframe, flat, Gouraud, Phong), message approxShape() is sent to the scene, causing all objects to create their faceted approximation. Depending on whether the information on the scene's hierarchy shall be used for the rendering process or not (hierarchy versus flat list of objects), MRT offers different sets of utilities to traverse the scene. With the support of methods project() and diffuseCoefficient(), CGI3D performs the viewing transformation and the color computations for each individual triangle. It is worth mentioning that, due to the architecture of MRT, the use of message diffuseCoefficient() automatically provides texturing of approximated objects. Hidden surface elimination is currently based either on hardware or on the z-buffer algorithm. For low-memory systems the scan-line z-buffer algorithm can be used. Images can be created either statically (just the display of the scene) or together with an interactor allowing the interactive manipulation of the camera parameters. Animations (fly-throughs) can be created by providing an extra file specifying the camera positions and orientations as well as the interpolation mechanism.

5.3 Radiosity

Global illumination is applied to the scene after discretizing all objects into polygonal meshes, which is done by message approxShape(). This mesh can be used as the coarsest level of a two- or multi-level hierarchy typically needed for advanced radiosity algorithms like Progressive Refinement or Hierarchical Radiosity. Methods of the underlying winged edge data structure for the boundary representation can be used to implement regular subdivision or discontinuity meshing. Finally, color values are stored in all vertices, which enables CGI3D to render the scene without performing its own lighting calculation. A radiosity computation starts by calling radiosityInit() to allow for proper initialization of the radiosity data structures. The implementation of Hierarchical Radiosity, for example, starts by creating the top level hierarchy and performing the initial linking step. The main loop then calls radiosityStep(), which must return true or false depending on the need for further iterations. A return value of false indicates completion of the current calculation. Method radiosityRender() serves as a visualization step to display the current solution. This is useful to observe the algorithm's progress and gives a chance for user interaction.
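The contract of the three virtual methods translates into a simple driver loop, sketched below (not a copy of MRT's actual radiosity() implementation):

// Sketch of the driver loop implied by the radiosity interface described above.
void runRadiosity(t_IllumScene& scene)
{
    scene.radiosityInit();                // build hierarchy, perform initial linking, ...
    while (scene.radiosityStep())         // false signals that no further iteration is needed
        scene.radiosityRender();          // visualize the intermediate solution
    scene.radiosityRender();              // display the final solution
}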

5.4 Graphical User Interface

Command-line packages providing user access to a varying number of visualization or modeling functions can easily be created by modifying a generic main() program and linking it with the according MRT libraries. In addition, end users get comfortable control over the powerful visualization functionality by embedding MRT into PENGUIN [FF95], a portable environment for a graphical user interface. PENGUIN provides an easy way to customize the user interface of applications without the need for recompiling or even re-linking the program. This is accomplished by a parser which reads in a description of the user interface and is embedded in each PENGUIN application. New user interface objects can easily be introduced by derivation from existing ones. This normally does not require a modification of the parser because objects can register themselves at run-time. To achieve easy scalability of all rendering aspects, MRT under PENGUIN provides control of all available shading and illumination techniques, as well as 3D interaction for walkthroughs or animations. Interactive refinement of the polygonal approximation allows the user to improve the accuracy of the polygon renderer, which can lead to Phong-like shading effects. To demonstrate the easy integration of MRT into third-party environments, an enhanced version of the MRT application has been built using the Microsoft Foundation Classes (MFC). It provides the user with the look and feel of a standard Microsoft Windows application, makes use of advanced user interface features like drag-and-drop, and supports multiple windows in a single application.



6 Introducing New Objects

In order to customize the renderer, the application programmer only needs to know the basic functionality of a geometric object. In our case the following pure virtual methods of base class t_Object have to be provided for each new object:⁵ boundingVolume() returns the bounding volume, intersect() computes the intersection between the object and a ray, surfaceNormal() computes the normal to the surface at a given point, and approxShapePQ() computes a faceted approximation of the object and stores it internally in a modified winged-edge data structure.

Optionally, the following functions can be overloaded as well: checkIntersect() just performs an intersection test and has been introduced to speed up calculations of light rays as well as tests with bounding volumes. inside() returns the information whether a given point is inside, on, or outside the object. The definition of this message is only necessary for objects allowed in CSG (constructive solid geometry) expressions. mapInvers() maps a point on the surface of that object onto a two-dimensional texture space defined over [0, 1] × [0, 1]. By default, this message returns false, indicating that 2D texture mapping is not supported for this specific object. checkBoxIntersect() determines if the object has a non-empty intersection with an axis-aligned box. As the default implementation already provides this information based on the object's bounding volume, it only needs to be redefined if the approximation with the bounding volume is too coarse (this is typically true for non-convex or skinny objects).

If a sphere were not already implemented, it could be introduced by the following definition of class t_Sphere:

class t_Sphere: public t_SurfaceObject {
public:
    t_Sphere (const t_3DVector& center, t_Real radius,
              const t_SurfaceShaderPtr& surf_Id = NULL);

    virtual t_BVol boundingVolume () const;       // return bounding volume
    virtual bool   intersect (t_Ray& ray);        // computes intersection with ray
    virtual bool   surfaceNormal (const t_3DVector& point, t_3DVector& normal);
                                                  // computes the surface normal at the given point;
                                                  // returns false if the normal cannot be computed
    virtual bool   approxShapePQ (const t_BRepQuality& quality);
                                                  // generates BRep according to given pseudo-quality
private:
    t_3DVector centerpoint;                       // object description
    t_Real     rad;
};

⁵ We are employing the concept of pure virtual functions to enforce the definition of these functions. Classes derived from t_Object which don't overload these functions remain abstract classes and thus cannot be instantiated.

Even though the above class definition is sufficient to introduce the object, we might want to add CSG functionality and 2D texture mapping with the two public functions

    ...
    virtual t_Real inside (const t_3DVector& point) const;     // return inside measure
    virtual bool   mapInvers (const t_3DVector& point_xyz,
                              t_2DVector& point_uv) const;
    ...
};
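For illustration, intersect() for such a sphere could be implemented along the following lines; a standard ray/sphere test is used, and the accessors on t_Ray (origin(), direction(), distance(), setIntersection()) are assumptions, not MRT's exact interface:

#include <cmath>

bool t_Sphere::intersect(t_Ray& ray)
{
    // Standard ray/sphere test; assumes ray.direction() is normalized.
    t_3DVector oc   = ray.origin() - centerpoint;
    t_Real     b    = dot(oc, ray.direction());
    t_Real     c    = dot(oc, oc) - rad * rad;
    t_Real     disc = b * b - c;
    if (disc < 0) return false;                      // no real root: the ray misses the sphere

    t_Real root = std::sqrt(disc);
    t_Real t    = -b - root;                         // nearer intersection first
    if (t <= 0) t = -b + root;                       // ray origin inside the sphere
    if (t <= 0 || t >= ray.distance()) return false; // behind the origin or farther than the current hit

    ray.setIntersection(this, t);                    // store hit object and hit distance in the ray
    return true;
}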

The second and final step adds syntactic rules to the scene description grammar from which the parser of the scene description is generated:

    | ...
    | SPHERE
        number > [ $surfnr ]
        sphere > [ $obj ]
    | ...

    sphere > [ t_SurfaceObjectPtr obj ] :
        vector > [ pos ]        /* center */
        scalar > [ radius ]     /* radius */
        ;

These two steps are sufficient for any new object. No other parts of the MRT environment have to be studied or modified by the programmer. Even though most users of a rendering system will only be confronted with the task of introducing new objects, our renderer also allows the introduction of new types of light sources and new types of 2D and 3D surface textures. New types of light sources can be introduced by deriving a new class, say t_SpotLight, from base class t_Light and redefining the virtual function directLight(). A new texture, for example, could be incorporated by deriving a new class from base class t_SurfaceShader and overloading the definition of virtual method diffuseCoefficient(), which, by default, returns the diffuse reflection coefficient. For solid textures method diffuseCoefficient() will need a 3D point. For 2D textures the pointer to the object will be needed as well in order to perform the inverse mapping from 3D world coordinates into 2D texture space. The implementation of bump mapping can easily be accomplished by overloading virtual message perturbNormal(). Different techniques for ray acceleration like regular space subdivision or octree acceleration can be (and have been) implemented by only providing new methods intersect() and checkIntersect() in a class derived from t_Scene.
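As a sketch of the texture case just mentioned, a derived surface shader could override diffuseCoefficient() roughly as follows; the parameter list, the t_Color constructor, and the uv accessors are assumptions for illustration only:

// Sketch of a derived surface shader producing a procedural checker pattern.
class t_CheckerShader : public t_SurfaceShader {
public:
    virtual t_Color diffuseCoefficient(const t_3DVector& point, t_Object* obj)
    {
        t_2DVector uv;
        if (obj == 0 || !obj->mapInvers(point, uv))          // inverse mapping into [0,1] x [0,1]
            return t_SurfaceShader::diffuseCoefficient(point, obj);
        int square = int(uv.u() * 8) + int(uv.v() * 8);      // 8 x 8 checker cells
        return (square % 2) ? t_Color(0.9, 0.9, 0.9) : t_Color(0.1, 0.1, 0.1);
    }
};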

7 Distributed MRT

Based on Remote Procedure Calls (RPC), MRT can exploit the computing power of a workstation cluster in parallel. Driven by a script, MRT clients are launched on several machines and start asking a server for image portions which have to be ray-traced. The image to be rendered is divided into many small bands, more bands than there are processors. This way, if one band takes longer than the others, the processors with the easy pieces are not all waiting for the processor with the difficult piece to finish. In a final step the script copies all image portions into the final picture. Because RPC is widely available, this approach works with a combination of different workstation platforms.
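The band scheduling can be illustrated with a small, self-contained sketch (it only shows the partitioning idea, not MRT's RPC code):

#include <algorithm>

// Hand out many small scanline bands so that slow bands do not stall the other clients.
struct Band { int firstLine, lastLine; };

// Returns the next band of at most bandHeight scanlines, or {-1, -1} when no work is left.
Band nextBand(int& nextLine, int imageHeight, int bandHeight)
{
    if (nextLine >= imageHeight)
        return {-1, -1};
    Band b{nextLine, std::min(nextLine + bandHeight, imageHeight) - 1};
    nextLine = b.lastLine + 1;
    return b;
}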

8 Conclusions

In this paper we presented an object-oriented software architecture for a 3D rendering package which significantly improves the readability of the underlying algorithms, drastically improves productivity, and, most importantly, consists of building blocks that lend themselves to programmer customization, thus making 3D image synthesis more accessible. Using the renderer as the platform for teaching and for lab assignments, our experiences are consistent with [PW90] and show that the system meets its design goal of being highly customizable and extendable.

This is supported by a recently started cooperation with a German mobile communication network supplier. The development of a prototype package to simulate the distribution of radio waves in urban environments based on MRT could be completed by one of our students within two weeks. The remarkably short development time (considering that we started from scratch), in combination with the fact that the prototype was significantly faster than what they had before, made it fairly easy to attract external funding for this project.

References

[Bau75] BAUMGART B. G.: A polyhedron representation for computer vision. In Nat. Comp. Conf. 44 (1975), AFIPS, pp. 589–596.

[BFH95] BENDELS H., FELLNER D. W., HAVEMANN S.: Modellierung der Grundlagen — Erweiterbare Datenstrukturen zur Modellierung und Visualisierung polygonaler Welten. In Modeling – Virtual Worlds – Distributed Graphics. infix, 1995, pp. 149–158.

[CW93] COHEN M. F., WALLACE J. R.: Radiosity and Realistic Image Synthesis. Academic Press, Cambridge, MA, 1993.

[ES90] ELLIS M. A., STROUSTRUP B.: The Annotated C++ Reference Manual. Addison-Wesley, Reading, Massachusetts, Dec. 1990.

[Fel92] FELLNER D. W.: Computer Grafik, 2nd ed., vol. 58 of Reihe Informatik. B.I. Wissenschaftsverlag, Mannheim, 1992.

[Fel96] FELLNER D. W.: Extensible image synthesis. In Object-Oriented and Mixed Programming Paradigms, Wisskirchen P. (Ed.), Focus on Computer Graphics. Springer, 1996, pp. 7–21.

[FF95] FELLNER D. W., FISCHER M.: PENGUIN – A Portable ENvironment for a Graphical User INterface. Tech. Rep. IAI-TR-95-8, University of Bonn, Dept. of Computer Science, Bonn, Germany, June 1995.

[FFW93] FELLNER D. W., FISCHER M., WEBER J.: CGI-3D – A 3D Graphics Interface. Tech. Rep. IAI-TR-95-x, University of Bonn, Dept. of Computer Science, Bonn, Germany, Aug. 1993.

[Gla89] GLASSNER A. S. (Ed.): An Introduction to Ray Tracing. Academic Press, London, 1989.

[ISO85] ISO: Information Processing Systems – Computer Graphics – Graphical Kernel System (GKS) – Functional Description, IS 7942, 1985.

[ISO91] ISO: Information Processing Systems – Computer Graphics – Interfacing techniques for dialogues with graphical devices (CGI), Part 1–6, IS 9636, Dec. 1991.

[ISO92] ISO: Information Processing Systems – Computer Graphics – Programmer's Hierarchical Interactive Graphics System (PHIGS), Amendment 1–3, IS 9592/Am. 1–3, Feb. 1992.

[Kub91] KUBOTA PACIFIC COMPUTER: Doré Programmer's Guide, 1991.

[PW90] PINSON L. J., WIENER R. S. (Eds.): Applications of Object-Oriented Programming. Addison-Wesley, Reading, Massachusetts, 1990.

[SC92] STRAUSS P. S., CAREY R.: An object-oriented 3D graphics toolkit. Computer Graphics 26, 2 (July 1992), 341–349.

[Sch97] SCHAEFER S.: Hierarchical radiosity on curved surfaces. In Rendering Techniques '97 (Proceedings of the Eighth Eurographics Workshop on Rendering) (New York, NY, 1997), Dorsey J., Slusallek P. (Eds.), Springer Wien, pp. 187–192. ISBN 3-211-83001-4.

[SGV94] SCHÖNHAGE A., GROTEFELD A., VETTER E.: Fast Algorithms. B.I. Wissenschaftsverlag, Mannheim, 1994.

[Sil93] SILICON GRAPHICS INC.: The OpenGL Reference Manual – The Official Reference Document for OpenGL, 1st ed. Addison-Wesley, Reading, Mass., 1993.

[Sun93] SUN MICROSYSTEMS: The Solaris XGL Graphics Library. Sun Microsystems Inc., Mountain View, CA, Feb. 1993.

[WC91] WIEGAND G., COVEY B.: HOOPS Reference Manual. Ithaca Software, 1991.

[Whi80] WHITTED T.: An improved illumination model for shaded display. Commun. ACM 23, 6 (June 1980), 343–349.

[Wiß90] WISSKIRCHEN P.: Object-Oriented Graphics: From GKS and PHIGS to Object-Oriented Systems. Springer, Berlin, 1990.



A Scene Description Grammar

The elements of the scene description file have to satisfy the following grammar. Reserved keywords (terminal symbols) are written in upper case (exception: INT, REAL, STRING, and FILENAME stand for the according values); non-terminal symbols are written in lower case.

/*
 * - sequences of transformation matrices (e.g. defined by a sequence
 *   of BEGIN ... END) update the accumulated transformation matrix
 *   'acc_tr_matrix' by prepending themselves:
 *       acc_tr_matrix = matrix * acc_tr_matrix
 *   Thus, in case of nested transformations, the innermost transformation
 *   is applied first.
 * - matrix expressions like m1 * m2 * m3 define a sequence of
 *   3D transformations with the leftmost matrix applied first
 */
#header

#token REFRACTION      " (f) | (F) | (refraction) "
#token DIFFUSE_TRANS   " (dt) | (DT) | (diff_trans) "
#token SPECULAR_TRANS  " (st) | (ST) | (spec_trans) "
#token EMISSIVE        " (em) | (EM) | (emissive) "
#token IMPORTANCE      " (imp) | (IMP) | (importance) "
#token SQR             " (SQR) | (sqr) "
#token SQRT            " (SQRT) | (sqrt) "
#token EXP             " (EXP) | (exp) "
#token POW             " (POW) | (pow) "
#token COS             " (COS) | (cos) "
#token SIN             " (SIN) | (sin) "
#token TAN             " (TAN) | (tan) "
#token AMB_LIGHT       " (amb_light) | (AMB_LIGHT) "
#token AEPS            " (aeps) | (AEPS) "
#token ATT_POS_LIGHT   " (att_pos_light) | (ATT_POS_LIGHT) "
#token ATT_SPOT_LIGHT  " (att_spot_light) | (ATT_SPOT_LIGHT) "
#token BACKGROUND      " (background) | (BACKGROUND) "
#token BEGIN           " (begin) | (BEGIN) "
#token BEZIER          " (bezier) | (BEZIER) "
#token BFEPS           " (bfeps) | (BFEPS) "
// brepobj
#token BREPOBJ         " (brepobj) | (BREPOBJ) "
#token BOX             " (box) | (BOX) "
#token BUMP            " (bump) | (BUMP) "
#token CAMERA          " (camera) | (CAMERA) "
#token IBUMP           " (ibump) | (IBUMP) "
#token RAND_BUMP       " (rand_bump) | (RAND_BUMP) "
#token RAND_IBUMP      " (rand_ibump) | (RAND_IBUMP) "
#token SINE_BUMP       " (sine_bump) | (SINE_BUMP) "
#token PNM_BUMP        " (pnm_bump) | (PNM_BUMP) "
#token PNM_IBUMP       " (pnm_ibump) | (PNM_IBUMP) "
//#token CHAR_3D       " (char) | (CHAR) "
#token COLORRANGE      " (colorrange) | (COLORRANGE) "
#token CONE            " (cone) | (CONE) "
#token CORONA          " (corona) | (CORONA) "
#token CSG_DIFF        " (csg_diff) | (CSG_DIFF) "
#token CSG_INTER       " (csg_inter) | (CSG_INTER) "
#token CSG_UNION       " (csg_union) | (CSG_UNION) "
#token CSPHERE         " (csphere) | (CSPHERE) "
#token CYLINDER_COAT   " (cylinder_coat) | (CYLINDER_COAT) "
#token DEF             " (def) | (DEF) "

MRT++ – Design Issues and Brief Reference #token DISC " (disc) | (DISC) " #token DIST_LIGHT " (dist_light) | (DIST_LIGHT) " #token ATT_DIST_LIGHT " (att_dist_light) | (ATT_DIST_LIGHT) " #token DIST_SPOT_LIGHT " (dist_spot_light) | (DIST_SPOT_LIGHT) " #token ATT_DIST_SPOT_LIGHT " (att_dist_spot_light) | (ATT_DIST_SPOT_LIGHT) " #token ELLIPSOID " (ellipsoid) | (ELLIPSOID) " #token EMBALL " (emball) | (EMBALL) " #token END " (end) | (END) " #token ENDDEF " (enddef) | (ENDDEF) " #token ENDFILE " (endfile) | (ENDFILE) " #token EXT " (ext) | (EXT) " #token EYEP " (eyep) | (EYEP) " #token FEPS " (feps) | (FEPS) " #token T_FLOAT " (float) | (FLOAT) " #token FOV " (fov) | (FOV) " #token GAMMACOLOR " (gammacolor) | (GAMMA) " #token GEN_CYLINDER " (gen_cylinder) | (GEN_CYLINDER) " #token HEIGHTFIELD " (heightfield) | (HEIGHTFIELD) " #token LOOKP " (lookp) | (LOOKP) " #token MARBLE " (marble) | (MARBLE) " #token MATRIX " (matrix) | (MATRIX) " #token MATRIX4X3 " (matrix4x3) | (MATRIX4X3) " #token MAXLEVEL " (maxlevel) | (MAXLEVEL) " #token MAXEYELEVEL " (maxeyelevel) | (MAXEYELEVEL) " #token MAXLIGHTLEVEL " (maxlightlevel) | (MAXLIGHTLEVEL) " #token METABALL " (metaball) | (METABALL) " #token NOISE " (noise) | (NOISE) " #token OBJECT " (object) | (OBJECT) " #token OUTFORMAT " (outformat) | (OUTFORMAT) " #token PARALLEL " (parallel) | (PARALLEL) " #token PHONG " (phong) | (PHONG) " #token POS_LIGHT " (pos_light) | (POS_LIGHT) " #token REF " (ref) | (REF) | (use) | (USE) " #token QUADRANGLE " (quadrangle) | (QUADRANGLE) " #token ROT_X " (rot_x) | (ROT_X) " #token ROT_Y " (rot_y) | (ROT_Y) " #token ROT_Z " (rot_z) | (ROT_Z) " #token SCALE " (scale) | (SCALE) " #token SCANNEROBJECT " (scannerobject) | (SCANNEROBJECT) " #token SCREEN " (screen) | (SCREEN) " #token SMBALL " (smball) | (SMBALL) " #token SPHERE " (sphere) | (SPHERE) " #token SPOT_LIGHT " (spot_light) | (SPOT_LIGHT) " #token SR_BEZIER " (sr_bezier) | (SR_BEZIER) " #token SR_POLYLINE " (sr_polyline) | (SR_POLYLINE) " #token SR_TORUS " (sr_torus) | (SR_TORUS) " #token STEXTURE " (stexture) | (STEXTURE) " #token STRING_TTF3D " (string_ttf3d) | (STRING_TTF3D) " //#token STRING_3D " (string) | (STRING) " #token SUPERQ " (superq) | (SUPERQ) " #token SRFSHADER " (srfshader) | (surface) | (SRFSHADER) | (SURFACE) " #token TETRAHEDRON " (tetrahedron) | (TETRAHEDRON) " #token TEXTURE_2D " (texture_2d) | (TEXTURE_2D) " #token TEXTURE_2D_TRANS " (texture_2d_trans) | (TEXTURE_2D_TRANS) " #token TEXTURE_2D24BIT " (texture_2d24bit) | (TEXTURE_2D24BIT) " #token TEXTURE_2D24BIT_TRANS " (texture_2d24bit_trans) | (TEXTURE_2D24BIT_TRANS) " #token TIME " (time) | (TIME) "



MRT++ – Design Issues and Brief Reference #token TORUS " (torus) | (TORUS) " #token TRANS " (trans) | (TRANS) " #token TRIANGLE " (triangle) | (TRIANGLE) " #token TRIANGLESTRIP " (trianglestrip) | (TRIANGLESTRIP) " #token TURBULENCE " (turbulence) | (TURBULENCE) " #token UP " (up) | (UP) " #token VECTOR " (vector) | (VECTOR) " #token WOOD " (wood) | (WOOD) " #token ISOSURFACE " (isosurface) | (ISOSURFACE) " #token BLINN_VOLUME " (blinn_volume) | (BLINN_VOLUME) " #token LEVOY_VOLUME " (levoy_volume) | (LEVOY_VOLUME) " #token VOLDATA " (voldata) | (VOLDATA) " #token REGULAR_GRID " (regular_grid) | (REGULAR_GRID) " #token SCALAR_OPACITY_RAMP " (scalar_opacity_ramp) | (SCALAR_OPACITY_RAMP) " #token GRADMAG_OPACITY_RAMP " (gradmag_opacity_ramp) | (GRADMAG_OPACITY_RAMP) " #token SCALAR_COLOR_RAMP " (scalar_color_ramp) | (SCALAR_COLOR_RAMP) " #token AMBIENT " (ambient) | (AMBIENT) " #token DIFFUSE " (diffuse) | (DIFFUSE) " #token PHASE_FUNCTION " (phase_function) | (PHASE_FUNCTION) " #token OPTICAL_DEPTH " (optical_depth) | (OPTICAL_DEPTH) " #token ATTENUATION " (attenuation) | (ATTENUATION) " #token STEPSIZE " (stepsize) | (STEPSIZE) " #token SAMPLE_EXACT " (sample_exact) | (SAMPLE_EXACT) " #token VOLDATA_NO " (voldata_no) | (VOLDATA_NO) " #token CUTREGION " (cutregion) | (CUTREGION) " #token OUTFILE " (outfile) | (OUTFILE) " #token INCLUDE " (include) | (INCLUDE) " #token T_INT "[0-9]+" #token REAL "([0-9]+.[0-9]* | [0-9]*.[0-9]+ | [0-9]+) {[eE]{[\-\+]}[0-9]+}" #token VARNAME "[a-zA-Z][a-zA-Z0-9]*" #token Eof "@" #token "/\*" #token "\"" #token "[\t\ ]+" #token "[\n\r]" #token "// ~[\n]* \n" #lexclass NAME #token "[\t\ ]+" #token "[\n\r]" #token "// ~[\n]* \n" #token FILENAME "{/}[a-zA-Z0-9_.\-]+(/[a-zA-Z0-9_.\-]+)*" #lexclass COMMENT #token "[\n\r]" #token "\*/" #token "\*\*/" #token "\*~[/]" #token "~[\*\n\r]+" #lexclass STRINGS #token STRING "\"" #token "\\n" #token "\\t" #token "\\v" #token "\\b" #token "\\r" #token "\\f" #token "\\a" #token "\\\\" #token "\\?" #token "\\’" #token "\\\"" #token "\\0[0-7]*" #token "\\[1-9][0-9]*" #token "\\(0x|0X)[0-9a-fA-F]+" based on Fellner D.: Extensible Image Synthesis [Fel96]


MRT++ – Design Issues and Brief Reference #token "[\n\r]" #token "~[\"\n\r\\]+" #lexclass START class t_MSDPar { /* * MSD Grammar starts here */ file : ( vardecl )* ( global )* ( screen )* ( light )* ( srf_shader | vol_data { ENDFILE } Eof ; vardecl : T_FLOAT floatdecl ( "," | VECTOR vecdecl ( "," | MATRIX matdecl ( "," | OBJECT objdecl ( "," | CAMERA camdecl ( "," ; floatdecl : var > [ name ] "=" number > [ val ] ; vecdecl : var > [ name ] "=" triple > [ val ] ; matdecl : var > [ name ] "=" matrix > [ m ] ; objdecl : var > [ name ] "=" transf_obj ; camdecl: var > [ name ] "=" "\(" vector > [ eyep ] vector > [ lookp ] vector > [ upv ] number > [ hfov ] number > [ vfov ] "\)" ; var > : | | | | ;

[ char fvar > vvar > mvar > ovar > uvar >

*ret ] [ $ret [ $ret [ $ret [ $ret [ $ret


| transf_obj )+

floatdecl vecdecl matdecl objdecl camdecl

)* )* )* )* )*

";" ";" ";" ";" ";"

] ] ] ] ]

fvar > [ char *ret ] : integer > [i] | COLORRANGE integer > [i] integer > [i]

/* RGB in [0,COLORRANGE] default 255 */


|

|

|

|


integer > [i] integer > [i] number > [r] number > [r] number > [r] "" color > [ amb ] POS_LIGHT /* positional light: color,position */ color > [ col ] vector > [ pos ] color > [ col ] vector > [ pos ] direction > [ dir ] number > [ aperture ] color > [ col ] vector > [ pos ] color > [ col ] vector > [ pos ] direction > [ dir ] number > [ aperture ] color > [ col ] /* color */ vector > [ pos ] /* position */ ";" number > [ radius ] /* light source radius */ integer> [ minSubDiv ] /* minimum number of subdivisions ( [ maxSubDiv ] /* maximum number of subdivisions (> 0) */ ATT_DIST_LIGHT /* attenuated distributed light */ color > [ col ] /* color */ vector > [ pos ] /* position */ ";" number > [ radius ] /* light source radius */ "," number > [ intensDiffLimit ] /* intensity difference limit for subdivision */ integer> [ minSubDiv ] /* minimum number of subdivisions ( [ maxSubDiv ] /* maximum number of subdivisions (> 0) */ DIST_SPOT_LIGHT /* distributed spot light */ color > [ col ] /* color */ vector > [ pos ] /* position */ direction > [ dir ] /* direction */ number > [ aperture ] /* cone aperture */ ";" number > [ radius ] /* light source radius */ integer> [ minSubDiv ] /* minimum number of subdivisions ( [ maxSubDiv ] /* maximum number of subdivisions (> 0) */ ATT_DIST_SPOT_LIGHT /* attenuated distributed spot light */ color > [ col ] /* color */ vector > [ pos ] /* position */ direction > [ dir ] /* direction */ number > [ aperture ] /* cone aperture */ ";" number > [ radius ] /* light source radius */ "," number > [ intensDiffLimit ] /* intensity difference limit for subdivision */ integer> [ minSubDiv ] /* minimum number of subdivisions ( [ maxSubDiv ] /* maximum number of subdivisions (> 0) */

; srf_shader : SRFSHADER /* surface shader */ integer > [ shadernr ] /* shader number must be != 0 */ surftypes > [ srfshd ] /* shader type, see below */ ; surftypes > [ t_SurfaceShaderPtr srfshd ] : { PHONG } /* default: phong shader */ phong > [ $srfshd ]


MRT++ – Design Issues and Brief Reference | | | | | | | | tors | | ents | |

TEXTURE_2D texture_2d > [ TEXTURE_2D_TRANS texture_2d_trans > [ TEXTURE_2D24BIT texture_2d24bit > [ TEXTURE_2D24BIT_TRANS texture_2d24bit_trans STEXTURE solid_texture > [ BUMP bump > [ IBUMP ibump > [ RAND_BUMP */ rand_bump > [ RAND_IBUMP rand_ibump > [ PNM_BUMP in PNM file */ pnm_bump > [ PNM_IBUMP pnm_ibump > [ SINE_BUMP sine_bump > [

/* 2D texture */

$srfshd ] /* 2D transparent texture */ $srfshd ] /* 2D true color texture */ $srfshd ] /* > [ $srfshd ] /* $srfshd ] /* $srfshd ] /* $srfshd ] /*

2D transparent true color texture */ solid (3D) texture */ bump map, reads perturbation vectors from file */ bump map w. interpolated perturbation vectors */ bump map, generates random perturbation vec-

$srfshd ] /* bump map, RAND_BUMP with interpolation */ $srfshd ] /* perturbation generated from intensity gradi$srfshd ] /* PNM_BUMP with interpolation */ $srfshd ] /* sinusoidal prturbation */ $srfshd ]

; phong > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" color > [ diffuse ] /* diffuse surface color */ number > [ kd ] /* diffuse reflection coefficient - kd */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," number > [ c ] /* specular exponent - c */ ( ";" coltyp > [ kt ] /* transmission coefficient - kt */ number > [ findex ] /* index of refraction = findex */ | ( REFRACTION number > [ findex ] | DIFFUSE_TRANS color > [ dt ] /* diffuse transmission coefficient*/ | SPECULAR_TRANS color > [ st ] /* specular transmission coefficient*/ | EMISSIVE color > [ emissive ] /* emissive color */ | IMPORTANCE number > [ imp ] /* importance f. radiosity */ )* ) ; texture_2d > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" color > [ diffuse ] /* diffuse surface color */ number > [ kd ] /* diffuse reflection coefficient - kd */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," number > [ c ] /* specular exponent - c */ ";" coltyp > [ kt ] /* transmission coefficient - kt */


MRT++ – Design Issues and Brief Reference number > [ findex ] ";" string > [ fn ] { "" }


/* index of refraction = findex */ /* the imagefile - only .ppm supported */ /* horizontal replication, default:1 */ /* vertical replication, default:1

*/

; texture_2d_trans > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" color > [ diffuse ] /* diffuse surface color */ number > [ kd ] /* diffuse reflection coefficient - kd */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," number > [ c ] /* specular exponent - c */ ";" coltyp > [ kt ] /* transmission coefficient - kt */ number > [ findex ] /* index of refraction = findex */ ";" string > [ fn ] /* the imagefile - only .ppm supported */ color > [ back ] /* background color */ number > [ alpha ] /* alpha channel */ { "" } ; texture_2d24bit > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" color > [ diffuse ] /* diffuse surface color */ number > [ kd ] /* diffuse reflection coefficient - kd */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," number > [ c ] /* specular exponent - c */ ";" coltyp > [ kt ] /* transmission coefficient - kt */ number > [ findex ] /* index of refraction = findex */ ";" string > [ fn ] /* the imagefile - only .ppm supported */ { "" } ; texture_2d24bit_trans > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" color > [ diffuse ] /* diffuse surface color */ number > [ kd ] /* diffuse reflection coefficient - kd */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," number > [ c ] /* specular exponent - c */ ";" coltyp > [ kt ] /* transmission coefficient - kt */ number > [ findex ] /* index of refraction = findex */ ";" based on Fellner D.: Extensible Image Synthesis [Fel96]

MRT++ – Design Issues and Brief Reference string > [ fn ] string > [ alpha ] { "" }

/* the imagefile - only .ppm supported */ /* alpha channel file (PGM only) */ /* horizontal replication, default 1 */ /* horizontal replication, default 1 */

; solid_texture > [ t_SurfaceShaderPtr srfshd ] : CSPHERE /* solid texture: concentric spheres */ st_csphere > [ $srfshd ] | WOOD /* solid texture: wood */ st_wood > [ $srfshd ] | NOISE /* solid texture: noise */ st_noise > [ $srfshd ] | TURBULENCE st_turbulence > [ $srfshd ] /* solid texture: turbulence function */ | MARBLE st_marble > [ $srfshd ] /* solid texture: marble */ | CORONA st_corona > [ $srfshd ] /* solid texture: corona */ ; st_csphere > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" vector > [ center ] /* center of pattern */ scalar > [ bandwidth ] /* width of circle */ color > [ centerColor ] /* color of the center */ color > [ bandColor ] /* color of the band */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," number > [ c ] /* specular exponent - c */ ";" coltyp > [ kt ] /* transmission coefficient - kt */ number > [ findex ] /* index of refraction = findex */ ; st_wood > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" vector > [ center ] /* center of pattern */ scalar > [ dist ] /* distance between cylinders */ "," scalar > [ width ] /* width of cylinders */ color > [ light ] /* light color */ color > [ dark ] /* dark color */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," number > [ c ] /* specular exponent - c */ ";" coltyp > [ kt ] /* transmission coefficient - kt */ number > [ findex ] /* index of refraction = findex */ ; st_noise > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" scalar > [ scale ] /* texture scale factor */ color > [ minimum ] /* color range minimum */ color > [ maximum ] /* color range maximum */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," based on Fellner D.: Extensible Image Synthesis [Fel96]


MRT++ – Design Issues and Brief Reference number > [ c ] /* specular exponent - c */ ";" coltyp > [ kt ] /* transmission coefficient - kt */ number > [ findex ] /* index of refraction = findex */ ; st_turbulence > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" scalar > [ scale ] /* texture scale factor */ color > [ minimum ] /* color range minimum */ color > [ maximum ] /* color range maximum */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," number > [ c ] /* specular exponent - c */ ";" coltyp > [ kt ] /* transmission coefficient - kt */ number > [ findex ] /* index of refraction = findex */ ; st_marble > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" color > [ base ] /* base color */ color > [ vein ] /* vein color */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," number > [ c ] /* specular exponent - c */ ";" coltyp > [ kt ] /* transmission coefficient - kt */ number > [ findex ] /* index of refraction = findex */ ; st_corona > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" vector > [ center ] /* center of corona */ scalar > [ radius ] /* corona radius */ ";" color > [ inner ] /* corona inner color */ color > [ outer ] /* corona outer color */ scalar > [ turb ] /* color turbulence (>=0.0) 0.0-no, >1.0-high */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," number > [ c ] /* specular exponent - c */ ";" coltyp > [ kt ] /* transmission coefficient - kt */ number > [ findex ] /* index of refraction = findex */ ; rand_bump > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" color > [ diffuse ] /* diffuse surface color */ number > [ kd ] /* diffuse reflection coefficient - kd */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," number > [ c ] /* specular exponent - c */ ";" coltyp > [ kt ] /* transmission coefficient - kt */ number > [ findex ] /* index of refraction = findex */ ";" number > [ ampl ] /* perturbed normal amplitude */ "," number > [ distrib ] /* perturbed normal distribution [0..1] */ "" "" { integer>

[ ysize ] [ xrepl ]

/* Replication in X and Y */

[ yrepl ]

[ seed ] } /* random seed */ ; rand_ibump > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" color > [ diffuse ] /* diffuse surface color */ number > [ kd ] /* diffuse reflection coefficient - kd */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," number > [ c ] /* specular exponent - c */ ";" coltyp > [ kt ] /* transmission coefficient - kt */ number > [ findex ] /* index of refraction = findex */ ";" number > [ ampl ] /* perturbed normal amplitude */ "," number > [ distrib ] /* perturbed normal distribution [0..1]*/ "" "" { integer> [ seed ] } /* random seed */ ; sine_bump > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" color > [ diffuse ] /* diffuse surface color */ number > [ kd ] /* diffuse reflection coefficient - kd */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," number > [ c ] /* specular exponent - c */ ";" coltyp > [ kt ] /* transmission coefficient - kt */ number > [ findex ] /* index of refraction = findex */ ";" ( number > [ amplitude ] /* normal perturbation amplitude */ "," number > [ frequency ] /* frequency */ "," number > [ damping ] /* damping factor */ "" )+ ";" ; bump > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" color > [ diffuse ] /* diffuse surface color */ number > [ kd ] /* diffuse reflection coefficient - kd */ ";"


MRT++ – Design Issues and Brief Reference color number "," number ";" coltyp number ";" string ""

> [ specular ] > [ ks ]

/* specular surface color */ /* specular reflection coefficient - ks */

> [ c ]

/* specular exponent - c */

> [ kt ] > [ findex ]

/* transmission coefficient - kt */ /* index of refraction = findex */

> [ fn ]

/* Bump map file */

> [ repX ]

/* Replication in X */

> [ repY ]

/* Replication in Y */

; ibump > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" color > [ diffuse ] /* diffuse surface color */ number > [ kd ] /* diffuse reflection coefficient - kd */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," number > [ c ] /* specular exponent - c */ ";" coltyp > [ kt ] /* transmission coefficient - kt */ number > [ findex ] /* index of refraction = findex */ ";" string > [ fn ] /* Bump map file */ "" ; pnm_bump > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" color > [ diffuse ] /* diffuse surface color */ number > [ kd ] /* diffuse reflection coefficient - kd */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," number > [ c ] /* specular exponent - c */ ";" coltyp > [ kt ] /* transmission coefficient - kt */ number > [ findex ] /* index of refraction = findex */ ";" string > [ fn ] /* PNM file */ number > [ ampl ] /* amplitude */ "" ; pnm_ibump > [ t_SurfaceShaderPtr srfshd ] : number > [ ka ] /* ambient coefficient - ka */ ";" color > [ diffuse ] /* diffuse surface color */ number > [ kd ] /* diffuse reflection coefficient - kd */ ";" color > [ specular ] /* specular surface color */ number > [ ks ] /* specular reflection coefficient - ks */ "," number > [ c ] /* specular exponent - c */ ";" coltyp > [ kt ] /* transmission coefficient - kt */ number > [ findex ] /* index of refraction = findex */ ";" string > [ fn ] /* PNM file */ based on Fellner D.: Extensible Image Synthesis [Fel96]


;

number > [ ampl ] ""

transf_obj :


/* amplitude */ /* Replication in X */ /* Replication in Y */

{ matrix > [ m ] } object[m]

; object[t_4x3Matrix* m] : ( object_with_surface > [ obj , ( vardecl )* ( srf_shader | transf_obj )+ END | INCLUDE FILENAME ovar > [ name ] { number > [ shdrnr ] } ";" | EXT var > [ extname ] ovar > [ name ] { number > [ shdrnr ] } ";" ) ;

shdrnr ] /* end sub scene */ /* include scene description */ /* msd filename */

/* external reference */

/* * all surface objects */ object_with_surface > [t_SurfaceObjectPtr obj,int surfnr] : CSG_UNION /* csg union number > [ $surfnr ] csg_union > [ $obj ] | CSG_DIFF /* csg difference number > [ $surfnr ] csg_diff > [ $obj ] | CSG_INTER /* csg intersection number > [ $surfnr ] csg_inter > [ $obj ] | BEZIER /* bicubic Bezier patch number > [ $surfnr ] bezier > [ $obj ] | BOX /* rectangular box number > [ $surfnr ] box > [ $obj ] | BREPOBJ /* read brepobj from .obj file number > [ $surfnr ] brepobj > [ $obj ] | CONE /* circular cone number > [ $surfnr ] cone > [ $obj ] | DISC number > [ $surfnr ] /* circular disc disc > [ $obj ] | ELLIPSOID /* ellipsoid number > [ $surfnr ] ellipsoid > [ $obj ] | GEN_CYLINDER /* general cylinder


*/ */ */ */ */ */ */ */ */ */

MRT++ – Design Issues and Brief Reference number > [ $surfnr ] gen_cylinder > [ $obj ] | HEIGHTFIELD /* number > [ $surfnr ] heightfield > [ $obj ] | CYLINDER_COAT /* number > [ $surfnr ] cylinder_coat > [ $obj ] | METABALL /* number > [ $surfnr ] metaball > [ $obj ] | PARALLEL /* number > [ $surfnr ] parallel > [ $obj ] | QUADRANGLE /* number > [ $surfnr ] quadrangle > [ $obj ] | SPHERE /* number > [ $surfnr ] sphere > [ $obj ] | SR_BEZIER /* number > [ $surfnr ] sr_bezier > [ $obj ] | SR_POLYLINE /* number > [ $surfnr ] sr_polyline > [ $obj ] | SR_TORUS /* number > [ $surfnr ] sr_torus > [ $obj ] | STRING_TTF3D /* number > [ $surfnr ] string_ttf3d > [ $obj ] | SUPERQ /* number > [ $surfnr ] superq > [ $obj ] | TETRAHEDRON /* number > [ $surfnr ] tetrahedron > [ $obj ] | TORUS /* number > [ $surfnr ] torus > [ $obj ] | TRIANGLE /* number > [ $surfnr ] triangle > [ $obj ] | SCANNEROBJECT /* number > [ $surfnr ] scannerobject > [ $obj ] | ISOSURFACE /* number > [ $surfnr ] isosurface > [ $obj ] ; /* * objects that may appear in csg expressions * (see last section for object descriptions) */ csgobj > [ t_SurfaceObjectPtr obj ] : CSG_UNION csg_union > [ $obj ] | CSG_DIFF


height field

*/

cylinder coat

*/

implicit object (blob)

*/

parallelogram

*/

quadrangle

*/

sphere

*/

SOR Bezier curve

*/

SOR polyline

*/

SOR torus

*/

3D true type string

*/

general superquadric

*/

tetrahedron

*/

torus

*/

triangle

*/

Cyberware scanner data

*/

iso surface from

volume data */


| | | | | | | | | | | | | | |

csg_diff CSG_INTER csg_inter BOX box CONE cone ELLIPSOID ellipsoid GEN_CYLINDER gen_cylinder CYLINDER_COAT cylinder_coat METABALL metaball SPHERE sphere SUPERQ superq SR_BEZIER sr_bezier SR_POLYLINE sr_polyline SR_TORUS sr_torus TETRAHEDRON tetrahedron TORUS torus ISOSURFACE isosurface

> [ $obj ] > [ $obj ] > [ $obj ] > [ $obj ] > [ $obj ] > [ $obj ] > [ $obj ] > [ $obj ] > [ $obj ] > [ $obj ] > [ $obj ] > [ $obj ] > [ $obj ] > [ $obj ] > [ $obj ]

> [ $obj ] ; transf_csgobj > [ t_SurfaceObjectPtr obj ] : csgobj > [ $obj ] | matrix > [ m ] csgobj > [ $obj ] ; csg_diff > [ t_SurfaceObjectPtr obj ] : transf_csgobj > [ p1 ] transf_csgobj > [ p2 ] ; csg_inter > [ t_SurfaceObjectPtr obj ] : "\{" ( transf_csgobj > [ csgObj ] )+ "\}" | transf_csgobj > [ p1 ] transf_csgobj > [ p2 ] ; csg_union > [ t_SurfaceObjectPtr obj ] : "\{" ( transf_csgobj > [ csgObj ] )+ "\}" based on Fellner D.: Extensible Image Synthesis [Fel96]


MRT++ – Design Issues and Brief Reference | transf_csgobj > [ p1 ] transf_csgobj > [ p2 ] ; bezier > [ t_SurfaceObjectPtr obj ] : vector > [ v[ 0] ] /* 4x4 points */ vector > [ v[ 1] ] vector > [ v[ 2] ] vector > [ v[ 3] ] vector > [ v[ 4] ] vector > [ v[ 5] ] vector > [ v[ 6] ] vector > [ v[ 7] ] vector > [ v[ 8] ] vector > [ v[ 9] ] vector > [ v[10] ] vector > [ v[11] ] vector > [ v[12] ] vector > [ v[13] ] vector > [ v[14] ] vector > [ v[15] ] ; heightfield > [ t_SurfaceObjectPtr obj ] : vector > [ fieldMid ] /* midpoint of height field */ vector > [ extension ] /* extension in XYZ */ number > [ approxVertices ] /* number of vertices in reduced graph */ number > [ approxMaxError ] /* measure of approximation quality */ number > [ useNormalTriangles ] /* 1: use t_NormalTriangles 0:t_Triangle*/ string > [ stmFilename ] /* ’simple terrain model’ file */ ; box > [ t_SurfaceObjectPtr obj ] : triple > [ center ] /* center */ "" ; brepobj > [ t_SurfaceObjectPtr obj ] : string > [ objFilename ] /* .obj-Format brep file */ ; cone > [ t_SurfaceObjectPtr obj ] : triple > [ basis ] /* center of basis */ triple > [ top ] /* top of cone, != center of basis */ number > [ radius ] /* radius (>0 !!) */ ; disc > [ t_SurfaceObjectPtr obj ] : triple > [ center ] /* center */ triple > [ normal ] /* normal */ number > [ radius ] /* radius */ ; ellipsoid > [ t_SurfaceObjectPtr obj ] : triple > [ center ] /* center */ "" ; gen_cylinder > [ t_SurfaceObjectPtr obj ] /* generalized cylinder */ : triple > [ bottom ] /* center of bottom and top */ based on Fellner D.: Extensible Image Synthesis [Fel96]


MRT++ – Design Issues and Brief Reference triple > [ top ] ""; cylinder_coat > [ t_SurfaceObjectPtr obj ] /* cylinder coat */ : triple > [ bottom ] /* center of bottom and top */ triple > [ top ] ""; metaball > [ t_SurfaceObjectPtr obj ] /* volume density functions */ : "" added_mball > [ mball ] /* another meta ball */ ; added_mball > [ t_MetaballPtr mball ] /* mball that is added to first mball */ : matrix_mball > [ m1 ] ( "\+" matrix_mball > [ m2 ] )* ; matrix_mball > [ t_MetaballPtr mball ] : matrix > [ m ] simple_mball > [ $mball ] | simple_mball > [ $mball ] ; simple_mball > [ t_MetaballPtr mball ] : SMBALL /* spherical metaball */ smball > [ $mball ] | EMBALL /* elliptical metaball */ emball > [ $mball ] | BEGIN added_mball > [ $mball ] END ; smball > [ t_MetaballPtr mball ] /* spherical metaball */ : vector > [ center ] /* center */ "" number > [ density ] /* density threshold */ ";" ; emball > [ t_MetaballPtr mball ] /* elliptical metaball */ : triple > [ center ] /* center */ "" number > [ density ] ";" ; parallel > [ t_SurfaceObjectPtr obj ] /* parallelogramm */ : vector > [ p1 ] /* starting pont */ vector > [ p2 ] /* endpoint of first spanning vector */ vector > [ p3 ] /* endpoint of second spanning vector */ /* vectors ordered counter-clockwise */ ;


MRT++ – Design Issues and Brief Reference quadrangle > [ t_SurfaceObjectPtr obj ] /* quadrangle */ : vector > [ p1 ] /* corners ordered counter-clockwise */ vector > [ p2 ] vector > [ p3 ] vector > [ p4 ] ; scannerobject > [ t_SurfaceObjectPtr obj ] /* Cyberware scanner data object */ : string > [ fn ] /* filename */ "" { "" } ";" ; sphere > [ t_SurfaceObjectPtr obj ] : vector > [ pos ] /* center */ scalar > [ radius ] /* radius */ ; sr_bezier > [ t_SurfaceObjectPtr obj ] : /* Solid Of Revolution: */ /* 2D-Bezier control points in xy_plane */ /* x_values must be increasing */ /* y_values must be positive */ /* y_value of first or last point may be 0 */ ( "\(" number > [ x ] "," number > [ y ] "\)" )+ ";" ; sr_polyline > [ t_SurfaceObjectPtr obj ] : /* Solid Of Revolution: */ /* 2D-Bezier control points in xy_plane */ /* x_values must be increasing */ /* y_values must be positive */ /* y_value of first or last point may be 0 */ ( "\(" number > [ x ] "," number > [ y ] "\)" )+ ";" ; sr_torus > [ t_SurfaceObjectPtr obj ] : triple > [ center ] /* center */ "" ; string_ttf3d : string string number

> > > >

[ [ [ [

t_SurfaceObjectPtr obj ] s ] /* string to be rendered */ fontName ] /* TTF fontname */ quality ] /* outline precision */


MRT++ – Design Issues and Brief Reference ; superq > [ t_SurfaceObjectPtr obj ] /* general superquadric */ : triple > [ center ] /* center */ "" "" ; tetrahedron > [ t_SurfaceObjectPtr obj ] : vector > [ v1 ] /* 4 corners */ vector > [ v2 ] vector > [ v3 ] vector > [ v4 ] ; torus > [ t_SurfaceObjectPtr obj ] : triple > [ center ] /* center */ "" ; triangle > [ t_SurfaceObjectPtr obj ] : vector > [ v1 ] /* corners ordered counter-clockwise */ vector > [ v2 ] vector > [ v3 ] ; isosurface > [ t_SurfaceObjectPtr obj ] : integer > [ voldatanr ] "" ; vol_object > [ t_VolumeObjectPtr obj ] /* volume objects */ : BLINN_VOLUME blinn_volume > [ $obj ] | LEVOY_VOLUME levoy_volume > [ $obj ] ; blinn_volume > [ t_VolumeObjectPtr obj ] : ( VOLDATA_NO integer > [ voldatanr ] | CUTREGION transf_csgobj > [ cutRegion ] | STEPSIZE number > [ stepSize ] | SAMPLE_EXACT integer > [ sampleExact ] | OPTICAL_DEPTH number > [ optDepth ] | ATTENUATION integer > [ attu ] | AMBIENT based on Fellner D.: Extensible Image Synthesis [Fel96]


| | |

number > [ DIFFUSE number > [ PHASE_FUNCTION integer > [ SCALAR_COLOR_RAMP vtflinear_c > [

ambient ] diffuse ] pf ] sc_ramp ]

)*

; levoy_volume > [ t_VolumeObjectPtr obj ] : ( VOLDATA_NO integer > [ voldatanr ] | CUTREGION transf_csgobj > [ cutRegion ] | STEPSIZE number > [ stepSize ] | SAMPLE_EXACT integer > [ sampleExact ] | ATTENUATION integer > [ attu ] | AMBIENT number > [ ambient ] | DIFFUSE number > [ diffuse ] | SCALAR_OPACITY_RAMP vtflinear_r > [ so_ramp ] | GRADMAG_OPACITY_RAMP vtflinear_r > [ go_ramp ] | SCALAR_COLOR_RAMP vtflinear_c > [ sc_ramp ] )* ; vtflinear_r > [ t_VTFlinear ret ] : ( "\(" number > [ x ] "," number > [ y ] "\)" )+ ( "\(" number > [ r ] "," color > [ c ] "\)" )+ VOLDATA integer > [ voldatanr ] vdtypes > [ voldata ] ; vdtypes > [ t_VolumeDataPtr voldata ] : NOISE vd_noise > [ $voldata ] | TURBULENCE vd_turbulence > [ $voldata ] | REGULAR_GRID vd_regular_grid > [ $voldata ] ; vd_noise > [ t_VolumeDataPtr voldata ] : number > [ scale ] ; vd_turbulence > [ t_VolumeDataPtr voldata ] : based on Fellner D.: Extensible Image Synthesis [Fel96]

35

MRT++ – Design Issues and Brief Reference

number > [ scale ] number > [ maxError ]

; vd_regular_grid > [ t_VolumeDataPtr voldata ] : integer > [ x_res ] integer > [ y_res ] integer > [ z_res ] vdtypes > [ source ] | string > [ fn ] ; /* * types */ matrix > [ t_4x3Matrix *m ] : simple_matrix > [ m1 ] ( "\*" simple_matrix > [ m2 ] )* ; simple_matrix > [ t_4x3Matrix *m] : MATRIX4X3 "\(" number > [ v[0] ] "," number > [ v[1] ] "," number > [ v[2] ] "," number > [ v[3] ] "," number > [ v[4] ] "," number > [ v[5] ] "," number > [ v[6] ] "," number > [ v[7] ] "," number > [ v[8] ] "," number > [ v[9] ] "," number > [ v[10] ] "," number > [ v[11] ] "\)" | |

|

mvar > TRANS "\(" number "," number "," number "\)" SCALE "\(" number "," number "," number "\)"

[ name ] /* translation by (x,y,z) */ > [ x ] > [ y ] > [ z ] /* scaling by (x,y,z) */ > [ x ] > [ y ] > [ z ]


MRT++ – Design Issues and Brief Reference |

|

|

|

ROT_X "\(" number "\)" ROT_Y "\(" number "\)" ROT_Z "\(" number "\)" "\(" matrix "\)"

/* rotation around x-axis */ > [ a ]

/* angle in degree */ /* rotation around y-axis */

> [ a ]

/* angle in degree */ /* rotation around z-axis */

> [ a ]

/* angle in degree */

> [ $m ]

; vector > [ t_3DVector ret ] : triple > [ $ret ] | vvar > [ name ] ; direction > [ t_3DVector ret ] : triple > [ $ret ] | vvar > [ name ] ; triple > [ t_3DVector ret ] : "\(" number > [x] "," number > [y] "," number > [z] "\)" ; coltyp > [ t_Color ret ] /* r,g,b color type */ : "\(" number > [r] "," number > [g] "," number > [b] "\)" ; color > [ t_Color ret ] /* colortype with range checking */ : "\(" number > [r] "," number > [g] "," number > [b] "\)" ; scalar > [ t_Real ret ] : number > [ $ret ] ; number > [ t_Real ret ] : added_number > [ $ret ] ( "\+" added_number > [ a ] | "\-" added_number > [ a ] )* ;


added_number > [ t_Real ret ] : simple_number > [ $ret ] ( "\*" simple_number > [ a ] | "\/" simple_number > [ a ] )* ; simple_number > [ t_Real ret ] : T_INT fvar > [ name ] | "\+" simple_number > [ $ret ] | "\-" simple_number > [ $ret ] | "\(" number > [ $ret ] "\)" | SQR "\(" number > [ $ret ] "\)" | SQRT "\(" number > [ $ret ] "\)" | EXP "\(" number > [ $ret ] "\)" | SIN "\(" number > [ $ret ] "\)" | COS "\(" number > [ $ret ] "\)" | TAN "\(" number > [ $ret ] "\)" ; integer > [ int ret ] : T_INT ;

Semantics: 

At least one light source and one surface have to be defined for a scene to be complete.



Include-files follow the same structure, but the entries param and lights will be ignored.



In case of multiple definitions only the last one is valid.



The scope of a surface is the file it is defined in and all files it includes. Local surface definitions override all previous (global) definitions.


An optional transformation matrix is applied to all objects enclosed between BEGIN and END. If the block contains more than one object, a subscene is created; this is an easy way of structuring the scene hierarchically. (A sketch of the equivalent construction through the C++ interface follows this list.)

Comments follow the C++ syntax and are either started by a double slash (//), extending to the end of the line, or enclosed in /* and */. Nesting of multi-line comments (/* ... */) is not permitted.
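As an illustration of the same hierarchical structuring on the API side, the following minimal sketch groups two elementary objects into a subscene. It assumes the constructor signatures used in the example of Appendix B and relies on t_Scene being derived from t_Object (see the Scenes module description further below); the helper function name is hypothetical.

#include "t_sphere.hh"
#include "t_cone.hh"

// Sketch only: build a subscene from two elementary objects, mirroring a
// BEGIN/END block in an MSD file. Constructor calls follow Appendix B;
// the function name is hypothetical.
t_ScenePtr makeLampSubscene(t_SurfaceShaderPtr shader)
{
  t_ObjectPtr* parts = new t_ObjectPtr[2];
  parts[0] = new t_Sphere(t_3DVector(0,50,0), 10, shader);
  parts[1] = new t_Cone(t_3DVector(0,0,0), t_3DVector(0,40,0), 5,
                        t_4x3Matrix::identity(), shader);
  // since t_Scene is itself derived from t_Object, the returned subscene
  // can be placed (and transformed) inside a parent scene as one object
  return new t_Scene(parts, 2);
}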

Supported Geometric Objects

The following table shows a ray-traced image and a short description of all geometric objects supported by MRT. All images were rendered at a resolution of 400x400 pixels.

Cubic Bézier Surface: defined by a control grid of 4 × 4 points (transformed by the accumulated transformation matrix).

Ellipsoid: defined by its center and the length of its main axes. The object is transformed by the accumulated transformation matrix.

Metaballs: defined by a list of spherical or elliptical density functions given by a density value, a center, and a radius or the length of the main axes. The object is transformed by the accumulated transformation matrix.

3D TrueType String: defined by bottom left point, character height, stroke width, stroke depth, and orientation. The object is transformed by the accumulated transformation matrix.


Box: axis-aligned box defined by its center and half of its side lengths in x, y, and z. This does not restrict the object in practice, as any 3D transformation can be applied.

Cone: defined by center of base, top of cone, and radius. The object is transformed by the accumulated transformation matrix.

CSG Expressions: solid objects (all objects listed under csgobj in the scene description grammar) can be combined by boolean operators union, difference, and intersection.

General Cylinder: defined by center of bottom and top, and radius of bottom and top. The object is transformed by the accumulated transformation matrix. This object is a generalization of a cylinder in which top and bottom may have different radii.

Parallelogram: defined by three points (transformed by the accumulated transformation matrix).


Quadrangle: defined by four points (transformed by the accumulated transformation matrix).

Sphere: defined by center and radius. The center is transformed by the accumulated transformation matrix but the radius is only scaled by an ’averaged’ scale factor, i.e. the sphere won’t become an ellipsoid.

Superquadric: defined by

    |x/a|^px + |y/b|^py + |z/c|^pz = 1

The parameters are its center, half of its side lengths in x, y, and z (a, b, and c), and the positive exponents px, py, and pz. The object is transformed by the accumulated transformation matrix. (A small evaluation sketch for this implicit function follows the table.)

Rotated Bézier Curve: Let p[0], p[1], ..., p[num-1] be the sequence of input points. These points are located in the xy-plane (z = 0) and define the 2D Bézier curve that is rotated around the x-axis. The following constraints have to be met:
1. p[0].x() < p[1].x() < ... < p[num-1].x()
2. p[0].y() >= 0, p[num-1].y() >= 0, and p[i].y() > 0 for 1 <= i <= num-2

Triangle: defined by three points (transformed by the accumulated transformation matrix).
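The inside/outside test implied by the superquadric equation above can be written down directly. The following is a self-contained illustration, not the code in x_supqu*.*:

#include <cmath>

// Illustrative only: evaluates the superquadric implicit function
//   f(x,y,z) = |x/a|^px + |y/b|^py + |z/c|^pz - 1
// in the object's local frame (center at the origin). f < 0 means inside,
// f > 0 outside, f == 0 on the surface.
double superquadricValue(double x, double y, double z,
                         double a, double b, double c,
                         double px, double py, double pz)
{
  return std::pow(std::fabs(x / a), px)
       + std::pow(std::fabs(y / b), py)
       + std::pow(std::fabs(z / c), pz)
       - 1.0;
}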


Supported Light Sources

To illustrate the visual effects of the available light source types, each row of the following table shows the same scene lit by a different light source. (A generic sketch of how penumbrae can be simulated for the distributed types follows the table.)

Attenuated distributed light: Distributed light source with simulated penumbrae with global attenuation

Attenuated distributed spotlight: A distributed spotlight source with simulated penumbrae and global attenuation

Attenuated light: A point light source with attenuation (both global and by blocking objects)

Attenuated spotlight: A spotlight source with attenuation (both global and by blocking objects)


Distributed light: A distributed light source with simulated penumbrae without global attenuation

Distributed spotlight: A distributed spotlight source with simulated penumbrae without global attenuation

Positional light: a simple positional (point) light source

Lightobject: An area light source of arbitrary geometry

Spotlight: a positional spotlight emitting into a cone of given direction and aperture
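The distributed light types above simulate penumbrae by treating the light as an area of the given radius and averaging several shadow tests. The sketch below illustrates that idea only; it is not the implementation in t_dlight.*, and the isShadowed callback, the jitter scheme, and the x()/y()/z() accessors on t_3DVector are assumptions.

#include <cstdlib>

// Generic illustration of penumbra simulation for a distributed light:
// the light of radius lightRadius is sampled at several jittered positions
// and the visibility results are averaged. All names are hypothetical.
double softShadowFactor(const t_3DVector& surfacePoint,
                        const t_3DVector& lightCenter,
                        double lightRadius,
                        int samples,
                        bool (*isShadowed)(const t_3DVector& from,
                                           const t_3DVector& to))
{
  int lit = 0;
  for (int i = 0; i < samples; ++i) {
    // crude jitter inside the light's bounding cube of edge 2*lightRadius
    double dx = lightRadius * (2.0 * std::rand() / RAND_MAX - 1.0);
    double dy = lightRadius * (2.0 * std::rand() / RAND_MAX - 1.0);
    double dz = lightRadius * (2.0 * std::rand() / RAND_MAX - 1.0);
    t_3DVector samplePos(lightCenter.x() + dx,
                         lightCenter.y() + dy,
                         lightCenter.z() + dz);
    if (!isShadowed(surfacePoint, samplePos))
      ++lit;
  }
  return double(lit) / samples;   // 1 = fully lit, 0 = fully shadowed
}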


Texturing

Solid texturing is – due to its nature – available for all objects (it is a pure feature of the surface type and not a feature of the object itself). 2D texturing builds on method mapInvers() and is therefore only available for objects supporting inverse mapping; currently, these are Sphere, Triangle, Quadrangle, Parallelogram, Ellipsoid, and Torus. (A generic sketch of such an inverse mapping follows the table below.) The following table illustrates the available solid texturing modes.

Noise solid texture: parameters are color range minimum and maximum

Turbulence solid texture: parameters are color range minimum and maximum

Marble solid texture: parameters are the base color and the vein color

Wood solid texture: given by the distance between the cylinders and their width


Concentric spheres solid texture: parameters are the two colors and the center of the pattern

Corona solid texture: parameters are two colors, the center and radius of the corona and a turbulence value from 0 to 1
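For the 2D texturing mentioned above, mapInvers() has to turn a surface point into (u,v) texture coordinates. The sketch below shows the standard inverse mapping for a sphere; the signature and names are illustrative and not MRT's actual t_Sphere code, and the x()/y()/z() accessors on t_3DVector are assumed.

#include <cmath>

// Generic illustration of the inverse mapping a sphere can provide for
// 2D texturing: a point on the sphere's surface is converted into (u,v)
// in [0,1]x[0,1] via spherical coordinates. Hypothetical signature.
void sphereInverseMap(const t_3DVector& hit, const t_3DVector& center,
                      double radius, double& u, double& v)
{
  const double pi = 3.14159265358979323846;
  double nx = (hit.x() - center.x()) / radius;
  double ny = (hit.y() - center.y()) / radius;
  double nz = (hit.z() - center.z()) / radius;
  u = 0.5 + std::atan2(nz, nx) / (2.0 * pi);   // longitude
  v = 0.5 - std::asin(ny) / pi;                // latitude
}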

Textures that affect geometry instead of color are called bump maps. Each time a ray hits the surface, the original surface normal at that point is perturbed according to a rule (a sketch of such a rule follows the table). The following table illustrates this effect.

Random bump map: a bump map consisting of randomly perturbed normals with a specified amplitude distributed among unmodified normals

Sine bump map: a function-based bump mapped surface with sinusoidal perturbation

PNM bump map: the bump map is generated from a specified PNM image file via intensity gradients
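As an example of such a perturbation rule, the sketch below tilts the normal by a damped sine of the hit position. The parameters mirror the amplitude/frequency/damping triples accepted by the sine_bump shader in the grammar, but the exact formula used by MRT is not reproduced here; the function name and the x()/y()/z() accessors are assumptions.

#include <cmath>

// Sketch of a sinusoidal bump rule: the surface normal is tilted by an
// offset that oscillates with the hit position and is then renormalized.
t_3DVector sineBumpNormal(const t_3DVector& normal, const t_3DVector& hit,
                          double amplitude, double frequency, double damping)
{
  double offset = amplitude * std::exp(-damping * hit.x())
                            * std::sin(frequency * hit.x());
  // perturb the normal in a fixed direction and renormalize (sketch only)
  t_3DVector n(normal.x() + offset, normal.y(), normal.z());
  double len = std::sqrt(n.x()*n.x() + n.y()*n.y() + n.z()*n.z());
  return t_3DVector(n.x()/len, n.y()/len, n.z()/len);
}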


Command Line Parameters

Parameters specified at the command line will always override parameters specified in the scene description file. Additionally, parameters can be preset by environment variables, in your .Xresources file (UNIX), or in the win.ini file (Windows).

-? or -h : help – display usage
-help : help – display usage and currently active environment parameters with their settings
-Q p0 : BRep approximation quality
-a diff psd : anti-aliasing with max color difference diff and max pixel subdivision psd; diff, psd >= 0
-i in : scene description file (default extension .msd)
-e reclevel : maximum recursion level. The default value of 5 can be overloaded this way or by the token MAXLEVEL in the scene description file (a value of 0 will prevent the tracing of refracted or reflected rays)
-o out : picture output file (default extension .ppm). Parameter "-o -" causes output to standard output and automatically selects quiet mode.
-B d spl : use binary space partitioning with max depth d; only subdivide bsp voxels that contain more than split limit spl objects
-O d spl : use octree space subdivision with max depth d; only subdivide octree voxels that contain more than split limit spl objects
-A : perform adaptive bounding volume optimization
-S num : use (regular) space subdivision with num voxels
-H : combine bounding volume optimization and regular space subdivision; use in combination with -A option
-p l t r b : sub-section of image to be computed; (left,top), (right,bottom) has to be within [(0,0),(hor-1,ver-1)]
-q : quiet mode. Runtime information and statistics will not be displayed.
-r hor ver : horizontal and vertical image size
-t time : abstract time control (seconds) that can be referred to in scene description files
-T qual : triangulation quality (qual >= 0, default 0)
-v w|s[a] : (pre)view using wireframe or surfaces; the optional char 'a' causes display of coordinate axes
-V : print version number

Program Modules

MRT's program modules are organized in several groups: brep (boundary representation), form (geometric objects), formbezier (Bézier patch object), formhfield (height field object), formsor (solid-of-revolution objects), lights (different types of light sources and collections of lights), misc (input, images, bounding volumes, and various utility classes), parser (scene description grammar), rad (radiosity subsystem), rayintersect (ray tracing subsystem), scenes (ray acceleration, i.e. voxel, octree, bsp, hybrid, hierarchical, regular), shdr (Phong surfaces, BRDF, BTDF), shdrbumpmap (various bump map surfaces), shdrraymc (Monte Carlo ray tracing, photon map), shdrraytrace (sampling, irradiance cache), shdrtexture (2D textures, transparent textures, solid textures), volumes (volume rendering subsystem, volume shaders, volume objects), vrml (VRML 1.0 parser).

Geometric Objects

Individual object modules implement construction, intersection, intersection test, normal vector and bounding volume computations, inverse mapping, and triangulation. (The classic ray/sphere test is sketched after the following table as an example of such an intersection computation.) All geometric objects are derived from base class t_Object in module t_object.*. Code regarding triangulation and handling of boundary representations is stored in modules prefixed with 'a_' instead of 't_'. Thus, 'x' in the following table stands for either 'a' or 't'.

3D TrueType String – x_ttf3d.*
Cubic Bézier Surface – x_bezier.*, t_grid.*
Box – x_box.*
CSG Expressions – x_csg.*
Cone – x_cone.*
Disc – x_disc.*
Ellipsoid – x_ellips.*
General Cylinder – x_gencyl.*
Metaball (density function) – x_mball.*
Parallelogram – x_parall.*
Quadrangle – x_quadra.*
Solids of Revolution (SOR), base class – x_solrev.*
SOR – Bézier Curve – x_srbezi.*
SOR – Polyline Curve – x_srpln.*
SOR – Elliptical Hyper-Torus – x_srtoru.*
Sphere – x_sphere.*
Superquadric – x_supqu*.*
Tetrahedron – x_tetrah.*
Torus – x_torus.*, t_poly.*
Triangle – x_triang.*
Cyberware Scanner Object – x_3dscan.*
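As an example of the intersection computations listed above, the classic ray/sphere test is sketched below in a self-contained form. It is textbook code, not the implementation in x_sphere.*; the x()/y()/z() accessors on t_3DVector are assumed.

#include <cmath>

// Illustration of the kind of intersection computation an object module
// provides: smallest positive t with |origin + t*dir - center| = radius,
// assuming dir is normalized. Hypothetical free function, sketch only.
bool raySphereIntersect(const t_3DVector& origin, const t_3DVector& dir,
                        const t_3DVector& center, double radius,
                        double& tHit)
{
  double ox = origin.x() - center.x();
  double oy = origin.y() - center.y();
  double oz = origin.z() - center.z();
  double b  = 2.0 * (ox * dir.x() + oy * dir.y() + oz * dir.z());
  double c  = ox * ox + oy * oy + oz * oz - radius * radius;
  double disc = b * b - 4.0 * c;
  if (disc < 0.0) return false;          // ray misses the sphere
  double s = std::sqrt(disc);
  double t = (-b - s) * 0.5;             // nearer root
  if (t <= 0.0) t = (-b + s) * 0.5;      // origin inside the sphere
  if (t <= 0.0) return false;            // sphere behind the ray
  tHit = t;
  return true;
}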

Scenes

t_scene.*: class t_Scene is the building block for structured scenes built from elementary objects. Class t_Scene is derived from class t_Object and implements a (sub)scene (intersection computations and tests as well as computation of a bounding box enclosing the (sub)scene).
t_voxscn.*: voxel-based structure resulting from regular subdivision of the bounding volume containing the complete scene. Used for speeding up the ray tracing process. Derived from base class t_Scene. (A sketch of this regular subdivision follows this list.)
t_voxel.*: implements a single voxel for a voxel-based scene. Each voxel maintains a list of all elementary geometric objects having a nonempty intersection with it.
t_octscn.*: octree-based scene for ray tracing acceleration. Instead of a regular subdivision like in module t_vscene, an octree is used as the spatial data structure holding the information which elementary geometric object covers which area of 3D space. Derived from base class t_Scene.
t_octnod.*: implements a node in the octree data structure. An octree is a hierarchical data structure that describes the distribution of objects of a scene throughout the 3D space occupied by the scene. An overview of octree algorithms and related data structures is given in: A. Watt, M. Watt: "Advanced Animation and Rendering Techniques", Addison Wesley, 1992.
t_bspscn.*: implements binary space partitioning. Used for speeding up the ray tracing process. Derived from base class t_Scene.
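The regular subdivision used by t_voxscn.* amounts to mapping a point inside the scene's bounding box to integer voxel coordinates. The following is a generic sketch of that mapping, not MRT's code; all names are hypothetical and the x()/y()/z() accessors on t_3DVector are assumed.

// Sketch: which voxel of an nx*ny*nz regular grid over the bounding box
// [boxMin,boxMax] contains point p? Assumes a non-degenerate box and a
// point inside it (points on the upper faces are clamped into range).
struct VoxelIndex { int i, j, k; };

VoxelIndex voxelOf(const t_3DVector& p,
                   const t_3DVector& boxMin, const t_3DVector& boxMax,
                   int nx, int ny, int nz)
{
  VoxelIndex v;
  v.i = int(nx * (p.x() - boxMin.x()) / (boxMax.x() - boxMin.x()));
  v.j = int(ny * (p.y() - boxMin.y()) / (boxMax.y() - boxMin.y()));
  v.k = int(nz * (p.z() - boxMin.z()) / (boxMax.z() - boxMin.z()));
  if (v.i == nx) v.i = nx - 1;
  if (v.j == ny) v.j = ny - 1;
  if (v.k == nz) v.k = nz - 1;
  return v;
}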

Surfaces

t_surf_s.cc: implements the ray tracing variant of method t_Surface::shade(), thus shadowing the default implementation of this method in the common sources library.
t_2t8b.*: 2D texturing for objects implementing method mapInvers(). The 2D textures must be defined by a ppm-file not exceeding 256 colors. Maintains an internal color table, utilizing the fact that texture entries can be stored as single bytes. The texture can modify the diffuse reflection or the specular transmission of a surface.
t_2t24b.*: 2D texture defined by a ppm-file with an arbitrary number of colors (each texture entry takes 3 bytes of internal storage).
t_stcsph.*: solid texture defined by two concentric spheres.
t_stwood.*: solid texture modeling wood.
t_stnois.*: solid texture defined by a noise function.
t_stturb.*: solid texture defined by a turbulence function (derived from noise texture).
t_stmarb.*: solid texture modeling marble (derived from turbulence texture).
t_stcoro.*: solid texture modeling a corona surface (derived from turbulence texture).
t_bump.*: implements a function-based bump mapped surface with random perturbation.

Light Sources

t_light.*: base class of a single light source; implements a positional light.
t_alight.*: attenuated positional light.
t_dlight.*: distributed positional light.
t_adligh.*: attenuated distributed positional light.
t_dirlit.*: directional light.
t_spligh.*: spotlight, derived from positional light.
t_asplig.*: attenuated spotlight.
t_dsplig.*: distributed spotlight.
t_adspli.*: attenuated distributed spotlight.
t_ligobj.*: light object.
t_lights.*: collection of light sources including efficient iterators.

Radiosity

t_hrpat.*: patch data structure for hierarchical radiosity.
t_hrscen.*: derived from t_IlluminatedScene; implements the hierarchical radiosity algorithm.
t_link.*: links for energy transfer between patches.
t_mesh.*: meshing routines to refine objects to a given curvature.
t_radfac.*: exchanges the faces used by all BRep routines; these faces provide a pointer to the radiosity patches.
t_radff.*: form factor calculation.
t_radpat.*: base class for radiosity patches.
t_radtxtr.*: implements radiosity textures, i.e. the radiosity of subdivided patches is stored in a single texture map.

Volume rendering

t_vdcont.*: abstract base class for all volume data objects which are defined by a continuous function.
t_vdgrid.*: abstract base class for all volume data objects whose data is stored in a grid structure.
t_vdregg.*: defines a regular grid.
t_volbln.*: defines a volume which is shaded by the Blinn model.
t_vollvy.*: defines a volume which is shaded by the Levoy model.
t_vtflin.*: defines a generic piecewise linear transfer function.
t_vddscr.*: abstract base class for all volume data objects whose data is defined in discrete space.
t_vdnois.*: realizes the noise function (R^3 -> R).
t_vdturb.*: realizes the turbulence function (R^3 -> R).
t_voldat.*: abstract base class for all volume data objects which are used by volume objects.
t_volobj.*: base class for all volume objects.
vhp_istr.*: reads HP's voxel data format.

Other Modules

t_image.*: implementation of class t_Image which controls the computation of the image. The main methods to compute an image are rayTrace() and approximate().
t_img_aa.*: adaptive anti-aliasing for ray tracing. Derived from class t_Image, this class overloads method rayTrace() and fires the initial rays adaptively.
t_mctrl.*: general input module: parsing of command line parameters and scene description, and setup of all variables.
msd.g, msd2.g, vrml.g: parsers for the scene description – input for PCCTS.
rtstacks.*: interface definitions for the various stacks used during parsing of the scene description.
rtvar.*: handling of variables in MSD files during parsing of scene descriptions.
rtdata.cpi: data module holding 'global' data (e.g. version number, instance to accumulate runtime statistics, ...).
rtstats.*: class t_RTStats collects various ray-trace statistics like the number of object intersections, the number of object intersection tests to determine shadow rays, ...
t_bvol.*: class t_BVol implements a bounding volume enclosing objects or (sub)scenes.
t_aa_box.*: class t_AA_Box implements an axis-aligned box.
t_scenit.*: utility for traversing hierarchical scenes.
t_idgen.*: utility for generating unique IDs of integral type (e.g. used for identifying rays in ray acceleration techniques).
t_rttime.*: class t_RTTime provides an abstract time to the ray tracer (control of effects like motion blur or scene changes for animation sequences).
t_iscene.*: class t_IllumScene implements an illuminated scene which is the top level visualization structure holding all light sources, the scene's background color, and all geometric objects in the scene.

Modules from the GEN Library

Further to the modules specific to MRT, the following modules from the GEN library are used:

t_camera.*: implements class t_Camera which, after initialization with the viewing parameters, provides the rays cast onto the scene.
t_cmdopt.*: class t_CmdlineOpt handles command line options.
t_env*.*: environment handling.
t_timer.*: class t_Timer for runtime statistics.
new_hand.*: error handling utilities for new() (e.g. out of memory).
fn_utils.*: low level utilities (strlwr(), appendExtension(), filepath()).
file_io.*: low level I/O utilities.
t_real.*: basic data type representing all non-integral values.
t_3x3mat.*: class t_3x3Matrix implements 3 × 3 matrices (transformations without translation) and operations on them.
t_4x4mat.*: class t_4x4Matrix implements 4 × 4 matrices and operations on them.
t_2dvect.*: class t_2DVector implements a vector or point in 2D with some operations.
t_3dvect.*: class t_3DVector implements a vector or point in 3D with some operations.
t_veclst.*: classes to maintain lists of 2D and 3D vectors.
t_vecdiv.*: various classes used as an interface to display packages or to standard graphics packages like PHIGS.
t_color.*: class t_Color implements a color value (RGB) with some operations.

B  Example

The following example shows a simple MRT application that creates and renders a scene containing three colored objects and a light source. At the beginning the environment handling is initialized to be able to access default parameters. The next step creates the surface shaders and the geometric objects that comprise the scene. A positional light source is created and stored, together with an ambient light, in an array of lights. The scene and the lights build an illuminated scene. Together with a camera, the illuminated scene is stored in a full scene data structure. Depending on the desired output quality the scene can now be ray traced or rendered with the polygon renderer. To ray trace the scene, we have to provide a filename for the output and a resolution. These parameters are stored in the MRT control object and the ray tracer is activated by calling method render(). For the polygon renderer the objects have to be tessellated first, which is done by method approxShape(). After initializing the polygon renderer CGI3D with the desired shading model (flat, gouraud, ...), the camera, and the lights, method render() displays the rendered scene in a window.

//
// This sample illustrates the use of MRT without reading a scene description
// by any parser. Instead, the whole scene is created on the fly and either
// a raytraced image (#define RAYTRACING) is written to file mrttest.ppm
// or a gouraud shaded approximation is rendered into a window by CGI3D.
//
//#define RAYTRACING
#ifndef RAYTRACING
#include "t_cgi3d.hh"
#include "t_acttri.hh"
#endif
#include "t_mctrl.hh"
#include "t_sphere.hh"
#include "t_box.hh"
#include "t_cone.hh"
#include "t_bqctrl.hh"

int main(int argc, char **argv)
{
  int sizeX = 100;
  int sizeY = 100;

  // 0. initialize environment handling
  t_EnvironmentRegister& r = t_EnvironmentRegister::instance();
  r.setCommandLine(argc,argv);
  r.processCommandLine(t_EnvironmentRegister::CommandlinePrio);

  // 1. create some shaders
  t_SurfaceShaderPtr red    = new t_SurfaceSpecTrans( t_Color::Red(),
                                t_Color::Red(), t_Color::Red(), 8,
                                t_Color::Red(), 1);
  t_SurfaceShaderPtr yellow = new t_SurfaceSpecTrans( t_Color::Yellow(),
                                t_Color::Yellow(), t_Color::Yellow(), 8,
                                t_Color::Yellow(), 1);
  t_SurfaceShaderPtr green  = new t_SurfaceSpecTrans( t_Color::Green(),
                                t_Color::Green(), t_Color::Green(), 8,
                                t_Color::Green(), 1);

  // 2. create some objects
#define NUM_OBJ 3
  t_ObjectPtr* objs = new t_ObjectPtr[NUM_OBJ];
  objs[0] = new t_Sphere(t_3DVector(0,0,0), 30, red);
  objs[1] = new t_Box(t_3DVector(0,-40,0), 20,10,20,
                      t_4x3Matrix::identity(), yellow);
  objs[2] = new t_Cone(t_3DVector(0,30,0), t_3DVector(0,50,0),
                       10, t_4x3Matrix::identity(), green);

  // 3. build a scene with these objects
  t_ScenePtr scene = new t_Scene(objs, NUM_OBJ);

  // 4. create a lightsource
  t_LightPtr* light = new t_LightPtr[1];
  light[0] = new t_Light(t_Color(0.8,0.8,0.8), t_3DVector(0,200,-100));

  // 5. and put it into the lights array
  t_LightsPtr lights = new t_Lights;
  lights->init(1, light);
  lights->ambientLight(t_Color(0.2,0.2,0.2));

  // 6. a fullscene is just a container of an illumscene (=scene+lights)
  //    and the camera
  t_FullScenePtr fscene = new t_FullScene;
  fscene->illumScene(new t_IllumScene(scene, lights, t_Color(0,0,0.2), 3,0));
  fscene->camera(new t_Camera(t_3DVector(0,20,-140),
                              t_3DVector(0,0,0),
                              t_3DVector(0,1,0),
                              20,20, sizeX,sizeY, 0.1,1000));

#ifdef RAYTRACING
  // this part is relevant if ONLY ray tracing is desired
  // 7. create the MRTControl and initialize the raytracer
  //    render() will raytrace the scene into a .ppm file (mrttest.ppm here)
  t_MRTControlPtr mrt = new t_MRTControl();
  mrt->prepare(fscene);
  mrt->setRaytracer("mrttest.ppm");
  mrt->sizeX(sizeX);
  mrt->sizeY(sizeY);
  mrt->render(fscene);
#else
  // this part is responsible for the approximate rendering via CGI3D
  // generate BReps...
  //fscene->illumScene()->scene()->approxShape(t_QMPQBasedOld(0.5));
  objs[0]->approxShape(t_QMPQBasedOld(0.5));
  objs[1]->approxShape(t_QMPQBasedOld(0.0));
  objs[2]->approxShape(t_QMPQBasedOld(-0.5));

  // open window
  t_Cgi* cgi = new t_Cgi(sizeX, sizeY, false, "MRT");
  cgi->colorSignificantBits(0xFF,0xFF,0xFF);
  t_Cgi3D* cgi3d = new t_Cgi3D(cgi);

  t_ActiveTriangle* shadingModel[5];
  t_Rendering*      renderingModel[2];
  shadingModel[0]   = new t_ActiveTriangleWireframe();
  shadingModel[1]   = new t_ActiveTriangleFlat();
  shadingModel[2]   = new t_ActiveTriangleGouraud();
  shadingModel[3]   = new t_ActiveTrianglePhong();
  shadingModel[4]   = new t_ActiveTriangleEdge();
  renderingModel[0] = new t_RenderingZB();
  renderingModel[1] = new t_RenderingWireframe();

  cgi3d->shading(shadingModel[1]);
  cgi3d->rendering(renderingModel[0]);
  cgi3d->backgroundColor(t_Color::Blue());
  cgi3d->camera(fscene->camera());
  cgi3d->lights(lights);

  // render
  cgi3d->render(scene);

  Cvalidity valid_kbd;
  char ch;
  cgi3d->requestKeyboard(valid_kbd, ch);

  delete cgi3d;
  delete cgi;
  delete shadingModel[0];
  delete shadingModel[1];
  delete shadingModel[2];
  delete shadingModel[3];
  delete shadingModel[4];
  delete renderingModel[0];
  delete renderingModel[1];
#endif
}
