Visual Interaction Platform

Dzmitry Aliakseyeu, Jean-Bernard Martens, Sriram Subramanian, Marina Vroubel, Wieger Wesselink
IPO, Center for User-System Interaction, Eindhoven University of Technology
Den Dolech 2, Eindhoven 5600 MB, The Netherlands
{d.aliakseyeu, j.b.o.s.martens, s.subramanian, m.vroubel, j.w.wesselink}@tue.nl

Abstract: The Visual Interaction Platform (VIP) is a Natural User Interface (NUI) that builds on human skills of real-world object manipulation and allows unhindered human-human communication in collaborative situations. The existing VIP is being extended towards the VIP-3 in order to support new kinds of interactions. An example of a natural augmented-reality interface to be realized on the VIP-3 is a pen-and-paper interface that combines properties of real pen and paper with typical computer functionality for flexible re-use of information. Two ongoing research projects on the VIP-3 are also briefly discussed: the first project develops supporting tools for early architectural design, while the second project aims at supporting 3D interaction for navigating and browsing through multidimensional data sets.

Keywords: Augmented Reality, Natural User Interface, Pen interaction, 3D interaction

1 Introduction

Technological improvements and cost reductions in, amongst others, computing power, display technology and sensor technology make the creation and handling of complex computer models feasible on an increasingly widespread scale. Visualization of and interaction with such models is becoming popular in professional applications in medicine, data mining and product design. Because the models under consideration are in many cases three-dimensional, current desktop (i.e., two-dimensional) interfaces are often experienced as cumbersome and ill suited to the task. There is therefore a growing interest in creating interfaces that are more usable and more natural, i.e., that enable users to reach beyond the screen that separates the real world from the virtual model. The obvious goal is to improve the efficiency, effectiveness and pleasure with which interaction tasks are performed. If improved interaction styles can be developed for these professional applications, then these new developments are also likely to have an impact on more consumer-oriented interactions with computerized systems once system costs come down even further.

Two increasingly popular ways of designing interfaces are based on Virtual Reality (VR) or Augmented Reality (AR). A VR system positions the user in a graphical representation of a computer-generated model, with the intention of completely immersing the user in this virtual world. VR systems require the user to wear devices like head-mounted displays, head trackers, data gloves, etc., which intrude on the user's personal space (as defined in (Cutting and Vishton, 1995)). These devices often seriously limit the ability of the user to interact in a social environment. Concerns have also been raised in both scientific and popular journals about possible harmful effects of such devices (Seymour, 1996), (IJsselsteijn et al., 1999). In an AR system, the user's real world is augmented with graphical or virtual information. The realization that people often communicate with their environment while performing a task is an important design aspect that is respected more in AR than in VR. Another important design principle of AR systems is that they try to make optimal use of the well-developed human skills of (two-handed) interaction with real objects. One of the key technological problems in VR, i.e., providing good haptic feedback, is circumvented in this way.

There is an increasing awareness that the interaction between users and computerized systems should be more natural, i.e., when done well the interaction should not feel like human-computer interaction but more like human-task interaction. The focus should be on interacting through the computer instead of on interacting with the computer. The above arguments have motivated us to consider the specific augmented reality system that is presented in this paper. The proposed Visual Interaction Platform (VIP) enables different natural interaction styles, such as writing and sketching with pen and paper, manipulation of real objects in two or three dimensions, etc. The work presented in this paper is part of a long-term research effort of the group to integrate different interaction techniques into one platform and to explore new applications of such a combined platform in different areas. Especially applications that profit from a combined use of different modalities, i.e., that use multi-modal interaction, are targeted. The remainder of this paper is organized as follows. In Section 2, we introduce the first-generation VIP; we discuss both the design concept of the platform and some hardware and software aspects of the implementation. In Section 3, we discuss the next-generation VIP-3 platform that is currently being assembled; the VIP-3 will support more modalities and functionality than the existing VIP. In Section 4, we motivate our interest in creating and applying electronic pen-and-paper interfaces with a natural look and feel, and describe such a prototype interface that has been realized on the existing VIP. In Section 5, we discuss two recently started projects that will make use of the more advanced properties of the VIP-3 system.

2 First generation VIP

2.1 Hardware and interaction

The hardware configuration of the VIP is similar to that of the commercially available BUILD-IT system (Rauterberg et al., 1997) and is shown schematically in Figure 1. A single Intel Pentium® II PC operates all components in the system. The VIP uses a video projector to create a large computer workspace on the horizontal surface of a table. This horizontal workspace is called the action-perception space. Instead of using the traditional keyboard and mouse for interaction, the user can interact with (perform his/her actions in) the VIP system using physical objects such as small bricks. These bricks are coated with infrared-reflecting material, and an infrared light source is located above the table next to the projector.

Figure 1: The Visual Interaction Platform

A camera located next to the infrared light source and the projector tracks the movements of the interaction elements (from now on mostly referred to as bricks). The user interacts with the system by modifying the location(s) and orientation(s) of these brick(s). Unlike in the current desktop environment, where mouse actions and cursor movements occur at separate positions, visual feedback in the VIP system occurs at the positions occupied by the bricks. Therefore, the action and perception spaces (Smets et al., 1995) of the user coincide much more closely. Apart from this horizontal action-perception space, the VIP can also project a second image on a (vertically oriented) wall. This optional second image is most often used to supply the user with more extensive visual feedback for increased spatial awareness, or to communicate with remote participants. It is therefore referred to as the communication space. The main features of the VIP are:
• the action and perception spaces coincide;
• two-handed interaction is possible;
• multiple users can collectively interact at the same time, using separate interaction elements, thereby promoting group work;
• the interaction style is easy to learn and requires little or no computer skills;
• the users do not have to wear intrusive devices like head-mounted displays;
• there are no messy wires to hinder user movements.
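The coincidence of action and perception spaces depends on an accurate mapping from the camera coordinates in which bricks are detected to the projector coordinates in which feedback is drawn; the calibration that establishes this mapping is described in Section 2.2. The sketch below shows one common way of expressing such a mapping between two views of a flat table surface, a 3x3 planar homography. The structure and names are our own illustration and are not part of the VIP++ library.

```cpp
#include <cstdio>

// Illustrative 3x3 planar homography mapping camera pixels to projector
// pixels. The actual VIP++ "GridTransformations" class is not public;
// this is only a sketch of the underlying idea.
struct Homography {
    double h[9];  // row-major 3x3 matrix

    // Map a camera-space point (cx, cy) to projector space (px, py).
    void map(double cx, double cy, double& px, double& py) const {
        const double w = h[6] * cx + h[7] * cy + h[8];
        px = (h[0] * cx + h[1] * cy + h[2]) / w;
        py = (h[3] * cx + h[4] * cy + h[5]) / w;
    }
};

int main() {
    // Example values; a real system estimates these nine numbers from the
    // projected calibration pattern captured by the camera.
    const Homography cameraToProjector{{1.25, 0.0, 40.0,
                                        0.0, 1.25, 25.0,
                                        0.0, 0.0, 1.0}};
    double px, py;
    cameraToProjector.map(320.0, 240.0, px, py);
    std::printf("brick at camera (320,240) -> projector (%.1f, %.1f)\n", px, py);
    return 0;
}
```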

2.2 Software

The VIP++ software consists mainly of a C++ library that was developed in-house in order to simplify the programming and testing of applications involving video-based interaction. The VIP++ software contains a number of data-type definitions, algorithms and classes for accessing, processing and storing images. Camera images in the VIP are acquired by means of a Leutron Vision® image-processing board (frame-grabber card) and the accompanying Daisy library (Leutron Vision®). The class "DaisyGrabber" allows easy access to camera images from within an application program, while the class "GridTransformations" applies the mapping between camera coordinates and projection (screen) coordinates that is required for accurate visual feedback. A fully automated calibration program projects a test pattern that is subsequently captured by the camera and analyzed to establish the transformation between both device coordinate systems. This calibration is required in order to guarantee that the visual feedback provided by the projector occurs at the actual brick positions. The image analysis in the VIP is fairly simple and is implemented in two stages. In the first stage, a "flood-fill" algorithm identifies regions in the image that share a common value (i.e., the white blocks). In the second stage, a labeling algorithm selects connected regions in a binary (black-and-white) image and calculates object features (such as area, moments and principal axes) for these regions. These object features are used to estimate the positions and orientations of the bricks.
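As a concrete illustration of the second stage, the sketch below derives area, centroid and principal-axis orientation for one labeled region from its image moments. The flood-fill and labeling stages are assumed to have already produced the region's pixel coordinates; this is a minimal reconstruction of the idea, not the actual VIP++ code.

```cpp
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

// Pose of one detected brick, derived from the moments of its pixel region.
struct BrickPose {
    double area;         // zeroth moment (pixel count)
    double cx, cy;       // centroid (first moments / area)
    double orientation;  // principal-axis angle in radians
};

// Estimate the pose from the pixels of one connected region.
// Minimal sketch; names and interfaces are illustrative only.
BrickPose estimatePose(const std::vector<std::pair<int, int>>& pixels) {
    BrickPose p{static_cast<double>(pixels.size()), 0.0, 0.0, 0.0};
    for (const auto& px : pixels) { p.cx += px.first; p.cy += px.second; }
    p.cx /= p.area;
    p.cy /= p.area;
    // Second-order central moments.
    double mu20 = 0.0, mu02 = 0.0, mu11 = 0.0;
    for (const auto& px : pixels) {
        const double dx = px.first - p.cx, dy = px.second - p.cy;
        mu20 += dx * dx; mu02 += dy * dy; mu11 += dx * dy;
    }
    // The principal axis of the region gives the brick orientation.
    p.orientation = 0.5 * std::atan2(2.0 * mu11, mu20 - mu02);
    return p;
}

int main() {
    // A small elongated blob along the x-axis as a toy example.
    std::vector<std::pair<int, int>> blob;
    for (int x = 0; x < 20; ++x)
        for (int y = 0; y < 5; ++y) blob.emplace_back(100 + x, 50 + y);
    const BrickPose pose = estimatePose(blob);
    std::printf("area=%.0f centroid=(%.1f,%.1f) angle=%.2f rad\n",
                pose.area, pose.cx, pose.cy, pose.orientation);
    return 0;
}
```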

2.3 Applications

Several applications have already been developed and tested on the VIP. The first application that was developed was an interface for a medical image browser. Many hospitals have regular meetings to discuss patient records. During such meetings, a large light box, preloaded with the images to be discussed, is currently used to view them. First, this way of working poses logistic problems (photos have to be developed, collected and mounted before the meeting, usually within a limited time frame). Second, the approach affords only limited flexibility (for instance, processing images on-line and using image sequences is not possible). Third, the viewing situation is typically such that only a few participants can observe an image at the same time, which obviously is not beneficial to the discussion. The VIP was used to demonstrate a possible alternative working environment for this application. The communication space of the VIP was used to project one or more images that are currently being considered for closer examination. The action-perception space allowed the user to select between records of different patients and to pick one or more images (represented by small thumbnails) for close-up viewing. A user-centered approach was used to create an interface with a minimal number of system functions. Medical experts were confronted with the system. The most important observations can be summarized as follows: 1) all users were able to use the system straight away, and 2) users enjoyed the new style of interaction. Another application that was developed on the VIP platform is the PhotoShare tele-application (de Greef and IJsselsteijn, 2000). Here, two VIP platforms connected over a network were used to study social presence and user satisfaction in an informal, home-oriented tele-application. The application allowed for shared viewing of photos (e.g. holiday and family pictures) between two (or more) family members at different locations. The distributed platform was asymmetrical: there was a presenter and a viewer. The presenter used the action-perception space to select pictures for large-format viewing. The distributed platform included audio and video communication channels between the two remote locations. The optional communication space was used to present the video connection to the remote partner(s). Figure 2 illustrates the set-up. The most important observations were that the communication space had a positive effect on the feeling of social presence, and that extensive functionality in the action-perception space was of little importance and could even diminish social presence.

Figure 2: The PhotoShare Tele-Application

3 New generation: VIP-3

The existing VIP is currently being extended to what is called the VIP-3 (see Figure 3), in order to support new interaction styles. One of the most important modifications is that we want to create the possibility of interacting above the table, hence extending the 2-D action-perception space of the table top to a 3-D interaction space above the table.

Figure 3: The VIP-3

We also want to enrich the existing 2-D interaction space. For this purpose, the existing table surface of the VIP has been replaced with a DrawingBoard® (CalComp®), which can accurately record pen movements. Amongst other things, this allows the user to perform more precise actions in the action-perception space, such as required for handwriting, drawing and sketching. In order to control the size of the workspace, the projector projects the image onto the table via a mirror, and the distance between projector and mirror can be varied. The communication space in the VIP-3 is equipped with a stereo back-projection system, which allows 3D visualization. Back projection also prevents the user from blocking the projected image while interacting above the table. Two cameras are used in the VIP-3. The first camera works in the infrared and has a similar functionality as in the current VIP system. In combination with a structured light source, this camera can also measure depth, so that it can scan the 3-D shape of objects. The second (color) camera operates in visible light and allows the system to grab images of real objects that are placed in the action-perception space (pictures, photos, etc.). The first camera image determines the required pan, tilt and zoom for the second camera. The two cameras can also be used to perform stereo observations of the action-perception space. The platform is equipped with an Intergraph® workstation with two Pentium® III processors. Currently, the in-house VIP++ software is used for accessing, processing and storing images. Software for pen input and 3D input is under development. In the future we plan to make more use of the Intel® Image Processing Library and the Open Source Computer Vision Library.
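To make the coupling between the two cameras concrete, one way the detection from the first (infrared) camera could drive the second camera is by converting the detected object's position and extent on the table into pan, tilt and zoom values. The geometry below is a hypothetical sketch with assumed coordinate conventions and camera placement; the actual VIP-3 camera-control interface is not described in this paper.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979323846;

// Pan/tilt/zoom request for the colour camera, derived from where the
// infrared camera has located an object on the table (illustrative only).
struct PtzRequest { double panDeg, tiltDeg, fovDeg; };

PtzRequest aimAt(double objX, double objY, double objSize,    // table coords (mm)
                 double camX, double camY, double camHeight)  // camera mount (mm)
{
    const double dx = objX - camX, dy = objY - camY;
    const double ground = std::sqrt(dx * dx + dy * dy);
    const double dist = std::sqrt(ground * ground + camHeight * camHeight);
    PtzRequest r;
    r.panDeg  = std::atan2(dy, dx) * 180.0 / kPi;
    r.tiltDeg = std::atan2(camHeight, ground) * 180.0 / kPi;  // down from horizontal
    // Field of view just wide enough (with ~20% margin) to frame the object.
    r.fovDeg = 2.0 * std::atan2(0.6 * objSize, dist) * 180.0 / kPi;
    r.fovDeg = std::max(2.0, std::min(60.0, r.fovDeg));       // stay within lens limits
    return r;
}

int main() {
    const PtzRequest r = aimAt(400.0, 300.0, 150.0, 0.0, 0.0, 1200.0);
    std::printf("pan=%.1f deg, tilt=%.1f deg, fov=%.1f deg\n",
                r.panDeg, r.tiltDeg, r.fovDeg);
    return 0;
}
```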

4 Pen-and-Paper interface

People are taught drawing and sketching, alongside speech, from a very early age. Writing enters life a bit later. Most people cannot recall a time when they did not have these skills. From this point of view, pen input is as natural as speech and should certainly be considered an important input modality in any Natural User Interface (NUI). Pen and paper are traditional companions in many creative activities: "From mechanical engineering to the graphic art, designers comprehensively reject the use of computers in the early, conceptual/creative phases of designing... designers prefer to use paper and pencil" (Gross and Yi-Luen Do, 1996). It is generally accepted that drawing or sketching supports thinking, recollection of earlier designs, making associations, etc., and is hence very valuable in developing and shaping ideas. Writing, on the other hand, has mostly been considered as a possible (and not very reliable) replacement for the keyboard in computer interfaces, and its use is currently almost entirely restricted to palm-top computers. Only recently has the importance of providing good sketching interfaces started to be recognized (Trinder, 2000). One of the early NUIs, the DigitalDesk (Wellner, 1993), was designed for the purpose of combining real paper and electronic documents in augmented reality. The system recognized input from pen or finger through the use of a digitizing tablet. It used a video camera to capture paper documents placed on the desk, and responded to interactions by projecting electronic images down onto the desk. The interactions were mostly 'select' and 'cut and paste' operations.

Let us now briefly consider pen-and-paper-like interfaces from a more technical point of view. We are especially interested in comparing digital pen input with possible alternatives (for instance, capturing graphical information by monitoring with a video camera, or off-line scanning of a piece of paper). First, modern digitizers do not only record pen positions, but also capture temporal and dynamic information, pen pressure and pen inclination. Second, sampling up to 200 points per second is typical for current digitizers, whereas 50 or 60 frames per second is the best that can be expected from video cameras. Third, digitizers supply spatial and temporal coordinates of the pen on-line, while camera images require additional, non-trivial (and hence error-prone) recognition. Fourth, modern digitizers supply spatial coordinates with a typical resolution between 0.01 and 0.05 mm; the spatial resolution of cameras is considerably worse. Accurate and detailed spatial and dynamic information describing writing and drawing is necessary for high-quality recognition and interpretation. It is therefore not surprising that successful implementations of pen-based interfaces for creative design use digitizers and electronic pens as input devices (Gross and Yi-Luen Do, 1996), (Lin et al., 2000), (Landay, 1996).

Although video images are ill-suited as input for sketching and writing, they may still have added value for creating more natural pen-and-paper interfaces. More specifically, rotation and translation of the paper while drawing or sketching are very natural and almost subconscious actions. These actions are difficult to derive from a digitizer input alone, and restricting the user's possibility to perform them can potentially change his/her attitude towards a pen-and-paper-based system. We are therefore interested in learning how freedom in handling the piece of paper influences user experience and task performance. Technically, video images can potentially be used to record the size, position and orientation of a real piece of paper, or to help manage these parameters for a virtual piece of paper. In Figure 4, an example of such an interface is shown (Vroubel et al., 2001). By moving the brick in the non-dominant hand, the user can change the position and orientation of a virtual piece of paper. A pen in the dominant hand can be used to write or draw on the virtual paper.

Figure 4: Example of a pen-and-paper interface

While the input resolution of digitizers is very high, the quality of visual feedback through projected images is far from perfect: "Note that the high graphical resolution and contrast of a ballpoint trace on plain white paper is unsurpassable with current electronic-paper devices" (Schomaker, 1998). We propose that a system that combines the high input resolution of digitizers with excellent visual output on real paper would be a step ahead in the implementation of pen-based interfaces. We therefore intend to study the possibility of including real, rather than virtual, paper as one of the visual outputs. From the point of view of the designer, he/she should be sketching and writing on paper in a way that closely resembles current practice. Meanwhile, the system captures pen and paper movements, and can add virtual information that supports the design on demand. Computer vision may potentially be used to recognize characters, symbols or gestures made by the user. In cases where such recognition is intended to influence the system's behaviour, we intend to follow a "lazy" recognition strategy (Nakagawa et al., 1993) that avoids interrupting one's thinking through the premature display of recognition results. We want to separate sketching or writing (i.e., the input phase) from performing tasks with the generated input; both activities are creative and demand attention. In communication between humans these activities are also often separated in time: people typically first shape their ideas before discussing the correctness of interpretation and understanding with others. Within the framework of this project we intend to implement and test pen-based interfaces for correcting both the lay-out and the interpretation of graphical images. More specifically, we are currently working on using pen-based input to enter electronic circuits into a simulator program.
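The following sketch illustrates how such a separation between input and interpretation might look in software: digitizer samples (position, pressure, inclination and time, as discussed above) are buffered into strokes, and a placeholder recognizer runs only when the user explicitly asks for interpretation. All class and function names are hypothetical; this is not the circuit-entry prototype itself.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// One digitizer sample: position, pressure, inclination and a timestamp,
// i.e. the information modern tablets deliver at up to 200 samples/second.
// Names are illustrative only.
struct PenSample { double x, y, pressure, tiltX, tiltY, timeSec; };
using Stroke = std::vector<PenSample>;

class LazySketchPad {
public:
    void addSample(const PenSample& s) { current_.push_back(s); }
    void penUp() {  // stroke finished, but NOT recognized yet
        if (!current_.empty()) { strokes_.push_back(current_); current_.clear(); }
    }
    // Recognition runs only when the user explicitly requests it, so premature
    // results never interrupt sketching ("lazy" recognition).
    std::vector<std::string> recognizeOnRequest() const {
        std::vector<std::string> labels;
        for (const auto& s : strokes_)
            labels.push_back(s.size() > 30 ? "symbol" : "short mark");  // placeholder
        return labels;
    }
private:
    Stroke current_;
    std::vector<Stroke> strokes_;
};

int main() {
    LazySketchPad pad;
    for (int i = 0; i < 40; ++i)
        pad.addSample({10.0 + i, 20.0, 0.5, 0.0, 0.0, i / 200.0});
    pad.penUp();
    for (const auto& label : pad.recognizeOnRequest())
        std::printf("stroke interpreted as: %s\n", label.c_str());
    return 0;
}
```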

5 Current projects

In this section, we discuss two ongoing projects that aim at applying the VIP platform and extending its functionality. The first project, at the application level, focuses on creating more natural interfaces for early architectural design, while the second project, at the task level, studies interaction styles for manipulating 3D (volume) data such as typically encountered in medical applications.

5.1 Architectural design

The architectural design process can be divided into four phases: the sketch design phase, the preliminary design phase, the definitive design phase and the final (shop) design phase. The early architectural design stage comprises the sketch and preliminary design phases. The early design stage is rough and vague; in this stage the architect defines concepts and basic ideas for the shape and construction. Later phases of the architectural design process have a decreasing impact on the final result, since the architect has less freedom of choice in each successive stage. Currently, after creating a preliminary design (early design stage) on paper and/or in a scale model, a designer typically translates his/her design into a CAD program. The transition from sketches to the strict and exact representation of the design in a CAD program demands a lot of time and effort. Most CAD programs are developed for the later design stages (Suwa and Tversky, 1996). These programs contain tools for definitive and precise drafting and modeling. Only few tools are available to aid designers in the earlier process, so that freehand sketching and physical modeling remain a kind of art that only skilled and prolific designers master (Suwa and Tversky, 1996). A computer tool for early design may reduce the problem of transferring drawings from the early to the later design stages, but it does so at the cost of barriers in the creative process (compared with sketching on paper) (Lawson, 1999). The problem with introducing computer technologies at this stage is that the architect needs freedom, speed, ambiguity, vagueness and an absence of strictly predefined rules, which are not offered sufficiently by currently available tools. No tools seem to exist that can compete with these traditional ways of designing.

Figure 5: The Virtual and Real Paper

We propose a solution which can possibly aid designers in the early stage of design. Using the VIP-3 platform we can create a combination of virtual and real paper. The user is allowed to draw on real paper, which is synchronized with virtual paper. This combination we call the "Electronic paper" (see Figure 5). The sketches made by the user on real paper are traced into the computer. This interaction style preserves the naturalness of the traditional way of sketching. Amongst other things, the system can help in managing, storing and editing sketches, can assist in re-drawing and over-drawing (Trinder, 2000), and can use recognition to assist in the transition from sketches to the strict and exact representations in CAD programs. We are currently working on a prototype tool in order to evaluate the usability and naturalness of the "Electronic paper".
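A minimal sketch of what the synchronization between real and virtual paper could amount to in software: strokes are stored in the coordinate frame of the sheet, and the tracked pose of the sheet is used to map them to table coordinates whenever the sheet is moved or rotated. This is our own illustration of the concept, with hypothetical names, and not the prototype under development.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Point { double x, y; };

// Pose of the tracked sheet of real paper on the table (illustrative).
struct PaperPose { double originX, originY, angleRad; };

// Strokes are kept in paper coordinates, so they stay attached to the sheet
// when the user translates or rotates it while drawing.
class ElectronicPaper {
public:
    void addStrokePoint(const Point& pPaper) { strokePaper_.push_back(pPaper); }

    // Re-express the stored stroke in table coordinates for projection.
    std::vector<Point> strokeOnTable(const PaperPose& pose) const {
        std::vector<Point> out;
        const double c = std::cos(pose.angleRad), s = std::sin(pose.angleRad);
        for (const auto& p : strokePaper_)
            out.push_back({pose.originX + c * p.x - s * p.y,
                           pose.originY + s * p.x + c * p.y});
        return out;
    }
private:
    std::vector<Point> strokePaper_;
};

int main() {
    ElectronicPaper sheet;
    sheet.addStrokePoint({10.0, 0.0});
    sheet.addStrokePoint({20.0, 0.0});
    // The user has rotated the sheet by 90 degrees and moved it aside.
    const PaperPose pose{100.0, 50.0, 1.5707963};
    for (const auto& p : sheet.strokeOnTable(pose))
        std::printf("table coords: (%.1f, %.1f)\n", p.x, p.y);
    return 0;
}
```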

5.2 Manipulation of 3D data

The emphasis so far has been on providing interfaces for performing tasks in a 2D space. This project aims at extending the functionality of the VIP to allow certain tasks to be performed in 3D. For instance, in surgery planning, surgeons may want to set out a trajectory in 3D in order to carry out a biopsy. This trajectory should obviously avoid vital tissues. The MRI data that may be used for this purpose are 3D volumetric data, but due to the lack of appropriate interaction and visualization tools the surgeons are often restricted to viewing and interacting with 2D slices that appear in planes orthogonal to canonical axes through the patient's anatomy. This makes it very difficult to plan surgical paths that are potentially useful but lie along oblique directions (Hinckley et al., 1998). Other related application domains are the exploration of geological data for determining the quality of a gold mine (Johnson and Bacigalupo-Rose, 1993) or the usefulness of an oil well (Frohlich and Plate, 2000), (Smith, 1999), and the exploration of car-crash data (Frohlich and Plate, 2000) in order to analyze the extent of potential damage to people seated in the car. The tasks that occur repeatedly in all the above applications can be described as obtaining spatial awareness, selecting cross-sections and planning trajectories.

We propose the following interaction style. The user will be provided with a Rigid Intersection Selection Prop (RISP) with which cross-sections can be viewed for investigation. The RISP can be operated with the non-dominant hand. Based on further investigations, the RISP will be tracked either using the cameras and computer-vision techniques or using the ultrasonic 6-degrees-of-freedom position and orientation tracking device of InterSense (IS-600 Mark 2, http://www.isense.com). The user can interact with the cross-section by means of a pointer prop (PP) in the dominant hand. The pointer prop will also be tracked using computer-vision techniques. Figure 6 illustrates our interaction style.

Figure 6: The 3D interaction device

The functionality of the proposed system is similar to the Personal Interaction Panel (Zsolt, 1999) developed by the computer graphics group in Vienna, but the interaction style takes into account design requirements for a more natural interface (Subramanian and IJsselsteijn, 2000). More specifically, the interaction platform is free of intrusive devices, which encroach on the user's personal space, and free of wires that could hinder free movement.
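To make the role of the tracked RISP concrete, the sketch below extracts an oblique cross-section from a regular 3D volume, given a plane defined by the prop's tracked origin and two in-plane direction vectors. It is a generic slicing routine written under our own assumptions (nearest-neighbour sampling, unit voxel spacing), not the implementation of the proposed system.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

// Nearest-neighbour lookup in a regular volume (e.g. an MRI scan); illustrative only.
struct Volume {
    int nx, ny, nz;
    std::vector<float> voxels;  // size nx*ny*nz, x fastest
    float sample(double x, double y, double z) const {
        const int i = static_cast<int>(x + 0.5), j = static_cast<int>(y + 0.5),
                  k = static_cast<int>(z + 0.5);
        if (i < 0 || j < 0 || k < 0 || i >= nx || j >= ny || k >= nz) return 0.0f;
        return voxels[(static_cast<std::size_t>(k) * ny + j) * nx + i];
    }
};

// Extract a w-by-h cross-section image over the plane defined by the tracked
// pose of the intersection prop: an origin and two in-plane direction vectors.
std::vector<float> extractSlice(const Volume& vol, Vec3 origin, Vec3 u, Vec3 v,
                                int w, int h) {
    std::vector<float> img(static_cast<std::size_t>(w) * h);
    for (int r = 0; r < h; ++r)
        for (int c = 0; c < w; ++c) {
            const double x = origin.x + c * u.x + r * v.x;
            const double y = origin.y + c * u.y + r * v.y;
            const double z = origin.z + c * u.z + r * v.z;
            img[static_cast<std::size_t>(r) * w + c] = vol.sample(x, y, z);
        }
    return img;
}

int main() {
    Volume vol{64, 64, 64, std::vector<float>(64 * 64 * 64, 1.0f)};
    // An oblique plane through the middle of the volume.
    const auto slice = extractSlice(vol, {32, 0, 0}, {0.7, 0.7, 0}, {0, 0, 1}, 32, 32);
    std::printf("slice has %zu samples, first value %.1f\n", slice.size(), slice[0]);
    return 0;
}
```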

6 Conclusions and future directions

In this paper we have presented an example of an augmented-reality-based natural user interface, the Visual Interaction Platform (VIP). After introducing the first-generation VIP, we have discussed the VIP-3, a next-generation system with extended functionality that is currently being constructed. The VIP and VIP-3 support different modalities for interaction in a natural way and are intended to promote group work (i.e., they try to respect the social context in which many tasks are performed). It should be realized that the VIP is mainly a platform for developing new multi-modal interaction styles, and that the system is obviously too complex for most specific applications. It is clearly not our intention to push the VIP as a "one size fits all" solution for such applications. The evolution of the platform to its current state has mainly followed a synthesis-by-analysis approach, in which the user requirements in different applications are analyzed and system requirements for the platform are derived from these user requirements. The developments up to now have been guided by the identification of frequently occurring and important subtasks. In the future we intend to test the platform in different applications that require such tasks and to evaluate the usability and naturalness of the interaction styles developed for those tasks.

References

Cutting, J.E. and Vishton, P.M. (1995), "Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth". In: Epstein, W. and Rogers, S. (eds.), Perception of Space and Motion. San Diego: Academic Press, pp. 69-117.

Frohlich, B. and Plate, J. (2000), "The Cubic Mouse: A New Device for Three-Dimensional Input", Proceedings of CHI 2000, pp. 526-531.

de Greef, P. and IJsselsteijn, W.A. (2000), "Social Presence in the PhotoShare Tele-Application", Proceedings of PRESENCE 2000 - 3rd International Workshop on Presence, Delft, The Netherlands, 27-28 March 2000.

Gross, M. and Yi-Luen Do, E. (1996), "Ambiguous intentions: a paper-like interface for creative design", Proceedings of UIST'96 Symposium on User Interface Software and Technology. Seattle, ACM Press, pp. 183-192.

Hinckley, K., Pausch, R., Proffitt, D. and Kassell, N. (1998), "Two-handed Virtual Manipulation", ACM Transactions on Computer-Human Interaction, Vol. 5, pp. 260-302.

IJsselsteijn, W.A., de Ridder, H. and Vliegen, J. (1999), "Effects of stereoscopic filming parameters and display duration on the subjective assessment of eye strain", Proceedings of SPIE 3957, pp. 12-22.

Johnson, B.D. and Bacigalupo-Rose, S. (1993), "Three-dimensional data imaging in mine geology applications", International Mining Geology Conference, Kalgoorlie-Boulder, 5-8 July 1993. Australasian Institute of Mining and Metallurgy, Publication Number 5/93, pp. 35-46.

Landay, J.A. (1996), "SILK: Sketching Interfaces Like Krazy", Proceedings of CHI'96 Conference on Human Factors in Computing Systems. Vancouver, ACM, pp. 398-399.

Lawson, B. (1999), "'Fake' and 'Real' Creativity using Computer Aided Design: Some Lessons from Herman Hertzberger", Proceedings of Creativity & Cognition 99. Loughborough, ACM, pp. 174-180.

Lin, J., Newman, M., Hong, J. and Landay, J. (2000), "DENIM: Finding a Tighter Fit Between Tools and Practice for Web Site Design", Proceedings of CHI 2000 Conference on Human Factors in Computing Systems. The Hague, ACM, pp. 510-517.

Nakagawa, M., Machii, K., Kato, N. and Souya, T. (1993), "Lazy Recognition as a Principle of Pen Interface", INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems, pp. 89-90.

Rauterberg, M., Fjeld, M., Krueger, H., Bichsel, M., Leonhard, U. and Meier, M. (1997), "BUILD-IT: A Computer Vision-based Interaction Technique for a Planning Tool", Proceedings of HCI '97, Berlin: Springer, pp. 303-314.

Schomaker, L. (1998), "From handwriting analysis to pen-computer applications", Electronics & Communication Engineering Journal, pp. 93-102.

Smets, G.J.F., Stappers, P.J., Overbeeke, K.J. and Van der Mast, C. (1995), "Designing in virtual reality: Perception-action coupling and affordances". In: Carr, K. and England, R. (eds.), Simulated and Virtual Realities. Elements of Perception. London: Taylor & Francis, pp. 189-208.

Seymour, J. (1996), "Virtually real, really sick", New Scientist, pp. 34-37.

Smith, G. (1999), "Hollywood in Houston", ABCNEWS.com, 10th Sept 1999.

Subramanian, S. and IJsselsteijn, W.A. (2000), "Survey and Classification of Spatial Object Manipulation Techniques", Proceedings of OZCHI 2000, Sydney, Australia, pp. 330-337.

Suwa, M. and Tversky, B. (1996), "What Architects See in Their Sketches: Implications for Design Tools", Proceedings of CHI'96 Conference on Human Factors in Computing Systems. Vancouver, ACM, pp. 191-192.

Vroubel, M., Markopoulos, P. and Bekker, M. (2001), "FRIDGE: Exploring intuitive interaction styles for home information appliances", Proceedings of CHI 2001, Extended Abstracts (interactive video poster), Seattle, USA (in press).

Wellner, P. (1993), "Interacting with Paper on the DigitalDesk", Communications of the ACM 36, pp. 87-96.

Trinder, M. (2000), "The Computer's Role in Sketch Design: A Transparent Sketching Medium". In: Augenbroe, G. and Eastman, C. (eds.), Proceedings of the 8th International Conference on Computer Aided Architectural Design Futures. Atlanta: Kluwer Academic, pp. 227-244.

Zsolt, S. (1999), "The Personal Interaction Panel: a two-handed interface for Augmented Reality", PhD Dissertation, Institut für Computergraphik, TU Wien, Austria, Matr. Nr. 9326205, Sep. 1999.
