Periscope – A System for Adaptive 3D Visualization of Search Results

Wojciech Wiza*, Krzysztof Walczak†, Wojciech Cellary‡
Department of Information Technologies, The Poznan University of Economics, Poznan, Poland

A question arises: is this method of presenting search results the best possible? We believe that the answer to this question is: sometimes. For example, a user looking for information about the Web3D conference will type the query “Web3D Conference 2004” into one of the popular search engines and, most likely, the first link on the result page is what the user is looking for. In such a case, it does not make sense to replace the 2D page with a 3D world showing the link as a flying object somewhere in 3D scenery.
Abstract

A system for efficient 3D visualization of Web search results is presented. The system, called Periscope1, uses a novel approach for adaptive and customizable visualization of complex data. The whole process is divided into a number of interactive steps. At each step, the system can automatically choose the best method of presenting search results. The user can also select a specific presentation method to focus on certain properties of the result obtained. After analyzing the current search result, the user can narrow or broaden the search query and repeat the procedure.
However, assume that a user wants to find all research institutes in the world that perform research in the field of X3D technology. In the standard interface, the result obtained will be a list of the first ten pages related to X3D, sorted according to some ranking criteria. The ranking criteria can differ, but they usually reflect the accuracy of keyword matching, the popularity of the web page, the number of links, etc.
CR Categories: H.5.1 [Multimedia Information Systems]: Artificial, augmented, and virtual realities; H.5.2 [User Interfaces]: User interface management systems (UIMS); I.3.7 [Three-Dimensional Graphics and Realism]: Virtual reality
The response to the above question – is this form of presentation the best possible – is: certainly not. This form of presentation is not appropriate, simply because such a query cannot be answered in this way. What the user is interested in is a holistic presentation of the whole data set satisfying the query, not the details of the first few documents, where “first” is defined by a ranking mechanism that, in addition, often does not reflect the user's preferences.
Keywords: virtual reality, adaptive interfaces, human-computer interfaces
1 Introduction
With the growing number of resources available on the World Wide Web, search engines have become one of the most important and most frequently used services, heavily influencing the way users perceive the Internet.
To really meet the user's needs in the analyzed example, we need some kind of interface that would be able to show all the results in a synthetic, yet still comprehensible way. If the textual interface fails, we need some kind of graphical presentation of the data. Two options are possible: a 2D map and a 3D scene.
Considerable research effort has been invested in the development of efficient methods of collecting and indexing data, algorithms for query processing, and data caching mechanisms. The element that has remained almost untouched since the very beginning of search engines is the presentation interface. Web search engines display search results as collections of HTML pages containing lists of the resources found and a short description of each of them. If the number of results exceeds one page, more pages can be generated, usually up to some limit specific to each search engine.
The 2D map presentation can provide reasonably good results in some cases; however, it misses some important features needed to make a real difference and provide satisfactory results. In a 2D graphical presentation, the user is presented with a flat panel, limited in size, where information may be presented in the form of color patches, sometimes textured, with different shapes – which can be distinguished only when the number of objects presented is low. User interaction is limited to selecting a point on the panel.
* [email protected]  † [email protected]  ‡ [email protected]
The second option is a 3D scene. In 3D, we gain three elements which we believe are of critical importance for this kind of system: user interaction, user cognition, and information capacity. User interaction does not only mean navigation in the space but also interaction with the contents, e.g., moving and rotating objects and selecting objects that are of interest to the user. Importantly, the interactive content does not require a proprietary format: it can be based on open standards, and standard browsers can be used.
Copyright © 2004 by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions Dept, ACM Inc., fax +1 (212) 869-0481 or e-mail [email protected]. © 2004 ACM 1-58113-845-8/04/0004 $5.00
1 The Periscope system is the result of a joint research project of the Poznan University of Economics in Poland and France Telecom R&D in Rennes, France.
computer requires installation of special software. Only one visualization interface is offered in this system.
The second element is user cognition. A spatial metaphor for representing data is closer to the manner in which humans perceive the surrounding world. Contrary to flat 2D graphics, a 3D environment permits a user to change the viewpoint to improve perception and understanding of the observed data. A user trying to discover the meaning of objects in 3D space may, if this meaning is not clear at first sight, rotate or translate the objects. Information presented in this way is learned faster and more efficiently [Anders 1999; Tufte 1990].
In the system described in [Mukherjea and Hara 1999], search results are visualized on cards (3D plates) representing groups of semantically correlated documents. The system allows a user to highlight cards with documents that contain a particular keyword. The main disadvantage of this system is the chosen visualization metaphor: a user cannot get a view of the entire search result, because cards with documents cover each other. Another drawback is the limited amount of information presented about the documents.
The third element is information capacity. Objects presented in 3D interfaces can provide information in form of shapes, colors, textures, positions, sizes, orientations, and even behavior. Importantly, 3D space is not limited in size, so the only limiting factor is user perception.
Considerable work was done in the NIRVE project presented in [Sebrechts et al. 1999; Cugini et al. 2000; NIRVE]. The NIRVE system visualizes retrieved documents on the surface of a sphere and performs document clustering according to ‘concepts’, which are defined by keywords. The main drawback of this system is the small information capacity of the interface – only a limited number of documents can be visualized in a readable way. Another problem is the low number of available visualization dimensions (degrees of freedom) within the interface. The system uses only one relevancy metric and does not present multiple search result data dimensions.
However, moving into a 3D world also brings some difficulties. The most important ones are occlusions, complex navigation, limitations of user perception, presentation of 2D data, and submitting user queries. Mainly for these reasons, most of the previous attempts to build a pure 3D search system were not really successful. In this paper, we describe how a 3D search interface can be built using a hybrid approach taking the best of two worlds: 2D interfaces and 3D scenes. We describe a new method of interactive, adaptive 3D visualization of search results returned by indexing search engines. In this method, called AVE, different types of interfaces are used: holistic interfaces for presentation of aggregated data, analytical interfaces presenting details of the particular documents found, and hybrid interfaces. We also describe a prototype system, called Periscope, based on the AVE method. The Periscope system has been deployed and is accessible on the Internet.
Another research system is VR-VIBE [Benford et al. 1995; VRVIBE]. The system visualizes documents in a 3D space and permits the user to visually set the threshold level of the relevancy criterion, which controls how relevant a document must be in order to be visualized. The main drawbacks of this system are a single visualization environment and the small number of presented search result properties.

Some projects resulted in systems available on the Internet. One of them is a graphical system supporting a user in Web searching called InXight [INXIGHT]. Using a category tree principle and a quasi-3D visualization method (fish-eye zooming), the InXight interface permits browsing and keyword searching within the displayed category hierarchy. The system, however, does not support full-text searching of documents within the hierarchy; therefore, it is mainly used to present maps of websites.
The remainder of this paper is organized as follows. In Section 2, a short overview of related projects is presented. In Section 3, the AVE method is explained. In Section 4, the Periscope prototype system, its architecture, its components, and the interface selection method are described. In Section 5, example interfaces are presented. Finally, Section 6 summarizes the paper.
Another relevant system available on the Internet is ViOS [Tyson 2000]. ViOS directory resources are presented as a planet-like landscape. The visual capacity of this system is large; however, this is achieved at the cost of requiring the installation of heavyweight software (ca. 76 MB) and the use of a built-in web browser incompatible with commonly used web browsers. Again, the system offers only a single visualization environment.
2 State of the Art
There are a number of projects concerning 2D and 3D graphical visualization of data retrieved from search engines. Most of the projects that use 3D interfaces started in the mid-90s with the appearance of the VRML language for describing 3D scenes. The most prominent examples of 2D and 3D graphical visualization systems are briefly described below.
Substantial work regarding visualization of information has also been done in the Starlight system [STARLIGHT], which provides dedicated 3D interfaces for different application domains. The main drawbacks of the Starlight system are its limited accessibility for end-users and the dedicated software required on both the client and the server side.
Numerous projects have been conducted in the Human-Computer Interaction Labs [HCIL], mainly addressing visualization of large amounts of data [Fekete and Plaisant 2002]. While these projects refer mostly to two-dimensional interfaces, results obtained are also relevant to visualization in 3D interfaces including elements such as clustering, attribute mapping, etc.
A specialized system for visualization of news feeds is described in [Rossi and Varga 1999]. In the proposed interface, documents are displayed in the form of a city-like landscape. The system introduces multi-criteria visualization of retrieved data; however, the mapping of data dimensions to visual primitives is fixed. The interface is built as a standalone Java3D application, so a client
3 The AVE Method
To overcome the problems faced by systems designed up to now, a new approach to graphical presentation of search results is required. To this end, a new method, called AVE (Adaptive Visualization Environments), has been developed [Wiza et al. 2003.07].
The novelty of the AVE method consists in visualization of the entire search result in an automatically selected interface that best fits the search result characteristics (see Figure 1). The amount of information returned by an indexing search engine in response to a user query may vary significantly. The search engine may return information about several documents or several hundreds of thousands of documents. Consequently, it is not possible to create a single 3D environment capable of visualizing the entire spectrum of possible search result volumes. In the AVE method, the visualization system selects, from a collection of available visualization interfaces, the one that best describes the search result.
this path, a user is supported by interface selection logic that helps him/her select appropriate interfaces.
To support the exploration process with regard to the search result volume, three types of interfaces are used in the AVE method: holistic interfaces, analytical interfaces, and hybrid interfaces [Wiza et al. 2003.06]. A holistic interface is used in the case of voluminous search results, where a detailed presentation of all documents would cause interface overloading and illegibility. In a holistic interface, the entire search result is categorized according to one or several criteria. In such an interface, a 3D object represents a group of documents sharing the same value of the categorization attribute. Categorization criteria may be chosen automatically or may be pre-selected by a user.
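The grouping step of a holistic interface can be sketched as follows. This is a minimal illustration, not the Periscope implementation; the document records and attribute names are hypothetical.

```python
from collections import defaultdict

# Hypothetical document records; the attribute names are illustrative,
# not the actual search engine schema.
documents = [
    {"url": "http://a.example", "language": "English", "type": "HTML"},
    {"url": "http://b.example", "language": "Polish", "type": "PDF"},
    {"url": "http://c.example", "language": "English", "type": "HTML"},
]

def categorize(docs, attribute):
    """Group documents by the value of one categorization attribute;
    each resulting group would be rendered as a single 3D object (glyph)."""
    groups = defaultdict(list)
    for doc in docs:
        groups[doc.get(attribute, "unknown")].append(doc)
    return dict(groups)

groups = categorize(documents, "language")
```

Each key of `groups` then corresponds to one glyph in the holistic scene, with group size (or another aggregate) driving the glyph's visual properties.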
Figure 2. Example exploration path
Analytical interfaces present the search result in detail and are therefore best suited for comprehensive analysis of the visualized data. Analytical interfaces are applied when the number of documents found is relatively small. With a small number of documents visualized, a user may focus on examining various properties of the documents. To support this kind of analysis, each 3D object in an analytical interface represents a single document, and the object properties reflect document attributes.
Figure 1. Selection of the visualization interface

The process of interface selection may be fully automatic, semi-automatic, or manual. Automatic interface selection is based on quantitative properties of the search result. The goal is to maximize the readability of the interface. Search result analysis may include such features as the overall number of documents, the number of different document languages, the number of different file types, the number of links to and from other documents, the number of matching keywords in each document title or body, the position of keywords within documents, the most frequent phrases, etc. The semi-automatic and manual modes allow a user to influence the selection process.
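The quantitative analysis driving automatic selection can be sketched as below. The feature set is a subset of those listed above, and the threshold rule is an assumption for illustration only; the actual system ranks registered Interface Models as described in Section 4.5.

```python
def result_features(docs):
    """Extract a subset of the quantitative properties of a search
    result (document fields here are hypothetical)."""
    return {
        "n_documents": len(docs),
        "n_languages": len({d.get("language") for d in docs}),
        "n_file_types": len({d.get("type") for d in docs}),
    }

def select_interface(features, analytical_limit=50):
    """Toy automatic rule: an analytical interface for small results,
    a holistic one otherwise. The limit of 50 is an assumed value."""
    if features["n_documents"] <= analytical_limit:
        return "analytical"
    return "holistic"
```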
A hybrid interface is an interface where aggregated and detailed aspects of a search result are presented simultaneously. In a hybrid interface, some objects represent documents grouped according to a categorization criterion, while other objects display detailed information about particular documents. Different search results may be visualized in different interfaces, but a user may also apply different interfaces to the same search result in order to focus on specific properties of the result. Visualization may be further customized by flexible assignment of search result attributes to visual properties of the interface objects (see Figure 3). In the AVE method, a user may decide how a particular attribute of the search result is presented within the interface (e.g., by specifying that the colors of objects represent the types of documents).
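The flexible attribute-to-visual assignment can be sketched as a user-editable mapping applied to each document or group. The palette, attribute names, and visual dimension names are assumptions made for the example.

```python
# Illustrative palette; not the system's actual configuration vocabulary.
PALETTE = {"HTML": (0.2, 0.4, 1.0), "PDF": (0.8, 0.1, 0.1)}

def visual_properties(doc, mapping):
    """Apply a user-chosen attribute-to-visual mapping, e.g. mapping
    'type' to 'color' makes object colors represent document types."""
    props = {}
    for attribute, dimension in mapping.items():
        value = doc.get(attribute)
        if dimension == "color":
            props["color"] = PALETTE.get(value, (0.5, 0.5, 0.5))
        else:
            props[dimension] = value  # e.g. language placed on a position axis
    return props

props = visual_properties({"type": "PDF", "language": "Polish"},
                          {"type": "color", "language": "position_x"})
```

Changing the mapping dictionary and regenerating the scene corresponds to the mapping change shown in Figure 3.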
In the AVE method, there are interfaces presenting a global view of the retrieved search result, as opposed to the standard web search interfaces, where only a limited number of documents, grouped on subsequent pages, are presented. Therefore, the amount of interaction a user must perform to obtain an overall view of search result is significantly reduced. Moreover, all documents are presented in a single environment, so a user may easily perceive patterns in the result.
The process of searching for a document of interest may be seen as a sequence of user queries, where the next query may narrow or broaden the previous search result. Using different levels of abstraction and applying the most appropriate 3D environment at each step, the AVE method permits navigation from a high-level aggregated, categorized view of the entire search result, through categorized views of sub-results, down to precise visualization of information about particular documents of interest. A user may re-formulate queries textually (e.g., by adding and removing keywords) or by interacting with 3D elements of the visualization environment (e.g., selecting objects by moving them to a predefined area). This multi-step process may be seen as a path through the visualized search results (see Figure 2). Along
Figure 3. Selection of the search result attribute mapping
over other Web technologies for dynamic content creation, such as Web page generators or XSLT. Applications can use X-VRML processing on both the server and the client side. High-level VR-specific elements such as classes or event handlers make the application code significantly shorter and more readable. Direct access to databases supports both data visualization and persistence. The language is extensible by design, allowing the creation of advanced domain-specific modules that simplify the process of designing advanced 3D applications.
4 Periscope System

4.1 System Overview
Based on the AVE method, a prototype system, called Periscope, has been built [PERISCOPE]. The Periscope system is an intermediary layer between a user searching for documents on the Web and the indexing search engines. Its role is to enhance the efficiency of the Web searching process by providing tools for 3D graphical visualization of web search results and for interactive manipulation of the search result, including query refinement.
The position of the Periscope system in the web searching process is presented in Figure 4. A user submits queries to the Periscope system. These queries are expressed using textual data (e.g., keywords) and/or by interaction with the visual objects constituting the virtual scene that represents the previous web search result. The queries are translated by the Periscope system to conform to the search engine query language and are sent to a cooperating indexing search engine. In response, the search engine sends the search result back to the Periscope system, which visualizes it as a 3D virtual scene. The visualization engine of the Periscope system uses Interface Models to automatically generate 3D virtual scenes and to create the necessary 2D user interfaces. Selection of the appropriate Interface Model for a particular search result is made according to an algorithm implemented in Metamodels.
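The query translation step can be sketched as follows. The operator syntax (`lang:`, `filetype:`) is an assumption modeled on common engine conventions, not a documented API of any specific cooperating engine.

```python
def to_engine_query(keywords, language=None, filetype=None):
    """Hypothetical translation of a Periscope query (keywords plus
    refinements made by interacting with 3D objects) into a cooperating
    engine's query syntax; the operator names here are assumptions."""
    parts = list(keywords)
    if language:
        parts.append(f"lang:{language}")   # refinement from scene interaction
    if filetype:
        parts.append(f"filetype:{filetype}")
    return " ".join(parts)
```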
A Metamodel is an X-VRML program used by the Periscope system to select the Interface Model that should be used when the system receives a request for an Interface Model change. Such a request contains the identifier of the Metamodel. As a result of Metamodel execution, the appropriate Interface Model is loaded. The Metamodel may use both static and dynamic routes between models. In the case of static routes, the next model depends only on the current model, i.e., for a given current model the next model will always be the same. Static routes are “hardwired” in the Metamodel. In the case of dynamic routes, the next Interface Model is determined based on an algorithm in the Metamodel, data retrieved from a database, and parameters provided by a user. Dynamic routes permit selection of the subsequent Interface Models in a fully automatic way or with the assistance of a user. In the latter case, two methods of user support may be employed: a user may choose a set of Interface Models from which the best one is automatically selected, or the system provides the user with a set of pre-selected models and the user chooses the best one according to his/her preferences and experience.
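The static/dynamic route logic described above can be sketched in a few lines. Metamodels are actually X-VRML programs; the Python below, including the model names and size threshold, is only an illustrative assumption.

```python
def next_interface_model(current, static_routes, result_size, user_choice=None):
    """Sketch of Metamodel route resolution. Static routes are
    'hardwired' current-to-next pairs; dynamic routes decide from the
    search result or, semi-automatically, from a user's choice."""
    if current in static_routes:
        return static_routes[current]      # static route
    if user_choice is not None:
        return user_choice                 # user-assisted dynamic route
    # fully automatic dynamic route (threshold is an assumed value)
    return "analytical_A" if result_size <= 50 else "holistic_B"
```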
A Metamodel can be provided with input parameters passed from the system or from the user interface. The Metamodel may also pass X-VRML variables to a selected Interface Model. In particular, a Metamodel may share the result of a query (search result) with an Interface Model. This significantly reduces the load imposed on the cooperating search engine.
Figure 4. The position of the Periscope system in the web searching process
4.3 Metamodels
4.2 Dynamic Modeling
4.4 Interface Models
The actual user interface of the Periscope system is dynamically created by the visualization engine based on the Interface Models [Walczak and Wiza 2002]. The interface consists of two parts: 3D and 2D. The 3D interface is displayed in a VRML browser and permits a user to examine the virtual scene that visualizes a search result. The 2D interface contains standard elements such as text fields, buttons, checkboxes, etc. and complements the 3D interface with functions difficult to implement in a pure 3D environment (such as entering keywords).
The Periscope system is largely based on the X-VRML dynamic modeling approach. This approach enables the development of interactive, dynamic, database-driven 3D applications. Parameterized models of the 3D virtual scenes that constitute an application are used to dynamically generate the final instances of the 3D scenes, taking into account data retrieved from databases, the current values of model parameters, a query provided by a user, and user privileges and preferences [Walczak and Cellary 2002; Walczak and Cellary 2003; X-VRML].
An Interface Model is an abstraction of a class of interfaces. Each Interface Model is composed of three parts as presented in Figure 5: a 3D Scene Model, an Interaction Interface and a 2D User Interface Model. The 3D Scene Model is an abstraction of a set of 3D scenes, while the 2D User Interface Model is an abstraction of 2D user interfaces. A 3D scene and a 2D user interface generated from the same Interface Model are interrelated. The Interaction Interface defines possible interactions between a user, the visualization system, the 2D user interface, and the 3D scene.
The models of virtual scenes are encoded in a high-level XML-based language, called X-VRML, which extends scene description standards such as VRML and X3D. The language enables parameterization of virtual scenes and content selection, and provides convenient methods of accessing databases. The design of the X-VRML language reflects the requirements of dynamic 3D applications and consequently offers advantages
Two types of visualization dimensions may be distinguished: classifying visualization dimensions δC and presentation visualization dimensions δP. Classifying visualization dimensions are used to distinguish groups of documents according to some classification criteria (e.g., language or document size). Each group (class) of documents is represented by a 3D object (glyph). Therefore, to distinguish between different document classes (i.e., glyphs), a classifying visualization dimension must have a discrete domain.
Presentation visualization dimensions are used to present properties of the group of documents in a given class (e.g., the number of documents in the class or the most frequent author name). The domain of a presentation visualization dimension may be either discrete or continuous. In the latter case, the ci value may be defined by specifying an upper boundary of the range (in order to maintain interface readability) but may as well be infinite.
Figure 5. The Interface Model elements

A 3D Scene Model defines the following elements:
The interface determinant Δ is defined as a pair ({φ}, K), where {φ} is the set of all facets in the interface, and K is the maximum number of objects (in an interface satisfying the readability prerequisites) representing documents or groups of documents.
• a dynamic model of the 3D scenes, and
• a specification of actions associated with objects.
An Interaction Interface defines the following elements:
• the attribute-to-parameter mapping, which defines possible assignments between search result attributes and Interface Model parameters,
• initial actions, i.e., actions performed during interface instantiation, and
• listeners, which bind 3D visual elements (nodes) with actions.
During interface selection, for a given search result, an attempt to use an analytical interface is made first. If the search result is too big (i.e., for each interface, K < |URLs|), a holistic interface is selected. To determine which holistic interface should be used for a given search result, a set T of multidimensional aggregation tables t is created over the document attributes (A1,…,Ak), where k is specific to a given search engine and denotes the number of different document attributes. Each field of an aggregation table contains a set D of identifiers {d1,…,dr} of the documents that conform to the (a1,…,ak) vector of classification constraints defined over the domain of Ai, for each 1 ≤ i ≤ k. In Figure 6, the concept of the aggregation table is presented.
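Building the set T of aggregation tables can be sketched as below; each table is keyed by the vector of constraint values and stores the set D of matching document identifiers. The sample attributes are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

def aggregation_table(docs, attributes):
    """One table t over a subset of attributes: each field holds the
    set D of ids of documents matching that vector (a1,...,ak) of
    classification constraints."""
    table = defaultdict(set)
    for doc_id, doc in enumerate(docs):
        table[tuple(doc.get(a) for a in attributes)].add(doc_id)
    return dict(table)

def all_aggregation_tables(docs, attributes, max_dims=3):
    """The set T: one table per 1-, 2-, or 3-attribute subset
    (T is limited to at most 3 classifying dimensions)."""
    return {subset: aggregation_table(docs, subset)
            for r in range(1, max_dims + 1)
            for subset in combinations(attributes, r)}

docs = [{"language": "English", "type": "HTML"},
        {"language": "Polish", "type": "PDF"},
        {"language": "English", "type": "PDF"}]
T = all_aggregation_tables(docs, ("language", "type"))
```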
A 2D User Interface Model defines the following elements:
• visual controls, their arrangement and behavior, and
• visual control actions, which are system functions exposed to the interface.
4.5 Interface Model Selection
In the Periscope system, the selection of interfaces is based on the interface readability paradigm [Wiza et al. 2003.06]. The paradigm describes a set of prerequisites which must be fulfilled by an interface in order for it to be considered readable by a user.
While it is not possible to calculate a readability factor for an existing interface, it is possible to define a set of constraints that must be met by a readable interface. In order to enable automatic ranking of interfaces for a given set of search result properties, the interfaces registered in the system must be formally described by a set of interface properties called facets.
An interface facet describes readability conditions for a single dimension of the interface (e.g., object color, object position). An interface facet φ is a pair (δi, ci), where:
• δi is a visualization dimension, and
• ci is the visualization dimension capacity, i.e., the maximum number of distinguishable values that may be assigned to the visualization dimension δi in a particular interface model,
where i ∈ [1, n] and n is the number of facets.
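The facet and determinant definitions map naturally onto small data structures. This is a sketch with hypothetical example values, not the system's internal representation.

```python
from dataclasses import dataclass

@dataclass
class Facet:
    """An interface facet: one visualization dimension and its capacity."""
    dimension: str   # e.g. "object color" or "position x"
    capacity: int    # max number of distinguishable values on this dimension

@dataclass
class Determinant:
    """The interface determinant: all facets plus the readability
    limit K on the number of objects."""
    facets: list
    max_objects: int  # K

# Hypothetical determinant of a holistic interface
det = Determinant(facets=[Facet("object color", 8),
                          Facet("position x", 10)],
                  max_objects=200)
```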
Figure 6. Example 3-dimensional aggregation table (axes: A2: Document Size, A3: Document Type, A5: Document Language)

Each aggregation table in T is built on a different subset of the set of (A1,…,Ak) attributes. Domains of attributes having quasi-continuous domains, like modification/creation date or document
size, are initially partitioned into predefined subdomains (e.g., document sizes 0–100B, 100B–0.5kB, 0.5–1kB, …). Such a division based on fixed ranges conveys rich semantic information to a user, as opposed to dynamic subdomains with a balanced number of documents in each subdomain.
If there is no triple that fulfills the above conditions, each table in the set T is modified in such a way that, for the attribute with the largest number of classification constraints, a number of constraints are grouped into one specific constraint, other. For instance, after such grouping, the attribute ‘Document language’ in Figure 6 will consist of the following values: {English, Italian, Polish, German, Other Languages}. For the modified aggregation tables, the matching procedure is performed again.
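The grouping into an other constraint can be sketched as keeping the most frequent values and folding the rest; the document counts per language below are hypothetical.

```python
def fold_into_other(value_counts, capacity):
    """Keep the most frequent classification constraints and fold the
    remainder into a single 'Other' constraint, so that the attribute
    fits a visualization dimension of the given capacity."""
    ranked = sorted(value_counts, key=value_counts.get, reverse=True)
    kept = ranked[:capacity - 1]          # reserve one slot for 'Other'
    folded = {v: value_counts[v] for v in kept}
    other = sum(n for v, n in value_counts.items() if v not in kept)
    if other:
        folded["Other"] = other
    return folded

# Hypothetical per-language document counts for the Figure 6 attribute
counts = {"English": 40, "Italian": 12, "Polish": 10, "German": 7,
          "French": 3, "Spanish": 2, "Dutch": 1}
folded = fold_into_other(counts, 5)
```

With these counts and a capacity of 5, the result matches the example above: {English, Italian, Polish, German, Other}.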
Attributes of the aggregation tables are assigned to classifying visualization dimensions, while attributes of the document set D (or values of functions over D) are assigned to presentation visualization dimensions. An interface using the Euclidean space with three dimensions may have at most three classifying visualization dimensions; therefore, the set T consists only of 1-, 2-, and 3-dimensional tables (note, however, that the Euclidean dimensions x, y, and z may also be used as presentation visualization dimensions). This assumption significantly reduces the computational complexity of the presented method. During practical studies, it has been verified that users prefer holistic interfaces with one or two classifying dimensions and a minimal number of presentation dimensions.
The internal architecture of the Periscope Visualization Engine is presented in Figure 7. The Visualization Engine modules are distributed between the server and the client. Both the server and the client are based on the X-VRML technology. The server is responsible for processing Metamodels and the static (non-interactive) elements of the 3D Scene Models. Processing of these parts may require database access, which is efficiently performed by the server. The client is responsible for processing Interface Models and the dynamic (interactive) elements of the 3D Scene Models, which may be performed only on the client side.
During the interface selection process, the logic of the visualization engine tries to find the best match between a table t ∈ T, an Interface Model im, and a mapping μ of attributes of t to visualization dimensions of im. For this purpose, each of the aggregation tables in T is matched against all the determinants of available Interface Models, taking into account that:
• the number of attributes used to create table t must be lower than or equal to the number of facets of Interface Model im; the numbers should be as close as possible;
• the total number of fields in t must be lower than or equal to the K value of im; the number of fields should be as close as possible to this value;
• the number of different values or ranges used for classification of each attribute in table t must be lower than or equal to the ci value of the visualization dimension δi mapped to this attribute.
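A feasibility check for the three conditions above can be sketched as follows. The paper ranks candidates by closeness of fit; this sketch only tests feasibility, and pairing the largest attribute demands with the largest facet capacities (both sorted descending) is a standard greedy argument for such a check.

```python
def satisfies_determinant(table, attr_value_counts, facet_capacities, K):
    """Check the three matching conditions for an aggregation table t
    against a determinant ({facets}, K). attr_value_counts maps each
    attribute of t to its number of distinct values/ranges;
    facet_capacities are the ci values of the Interface Model."""
    if len(attr_value_counts) > len(facet_capacities):   # condition 1
        return False
    if len(table) > K:                                   # condition 2
        return False
    # condition 3: pair the largest demands with the largest capacities
    caps = sorted(facet_capacities, reverse=True)
    needs = sorted(attr_value_counts.values(), reverse=True)
    return all(c >= n for c, n in zip(caps, needs))
```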
The Periscope Server is composed of the following modules:
• Search Engine Manager,
• Search Result Cache, and
• X-VRML Servlet with Server X-VRML Processors.
The Periscope Client is composed of the following modules:
If two or more triples (t, im, μ) have the same (maximum) value of the ranking function described above, the decision as to which triple should be used is either arbitrary or is left to the user.
4.6 Implementation of the Periscope Visualization Engine
• Server Communication Manager,
• 2D User Interface Manager,
• X-VRML Interpreter with Client X-VRML Processors,
• Interaction Manager,
• 2D User Interface, and
• 3D Virtual Scene Interface.
Figure 7. Components of the Periscope visualization engine
4.6.1. Periscope Server

The main element of the Periscope Server is the X-VRML Servlet with Server X-VRML Processors. The Search Engine Manager (used by some of the X-VRML processors) is responsible for passing user queries to the Search Engine (or database) and for retrieving the search result. A query obtained from the Periscope Client is translated into the query language of the particular Search Engine or into a database query expressed in SQL. The retrieved search result is used by the X-VRML Servlet (processors) during Metamodel and 3D Scene Model processing. A copy of the search result is also stored in the Search Result Cache to improve performance.

4.6.2. Periscope Client

The Periscope Client modules, except the 3D Virtual Scene Interface, are implemented in a Java Applet embedded in an HTML page. The 3D Virtual Scene Interface is a standard VRML browser. The Server Communication Manager is responsible for maintaining communication between the client and the server tier, i.e., sending HTTP requests to the X-VRML Servlet and retrieving server HTTP responses. A response may contain Interface Models (pre-processed or not), X-VRML variables, configuration files, etc. All data is packaged in XML envelopes. The Server Communication Manager is responsible for unpacking the data from XML and passing it to the 2D User Interface Manager and the X-VRML Interpreter. The 2D User Interface Manager retrieves interface definitions from the Interface Model and dynamically generates 2D User Interfaces. The 2D User Interface allows users to specify queries, refine queries, change the mapping of search result attributes to 3D scene visualization dimensions, and select Interface Models.

The X-VRML Interpreter acquires pre-processed 3D Scene Models from the Server Communication Manager and further processes them in order to create the 3D Virtual Scene Interfaces. A 3D Virtual Scene Interface permits a user to navigate and interact with a 3D scene visualizing the search result. All user interactions, originating from both the 2D User Interface and the 3D Virtual Scene Interface, are processed by the Interaction Manager, which activates the appropriate Periscope Client modules. Each user interaction passed to the Interaction Manager launches an action, which may engage any module of the Periscope Client and, consequently, the Periscope Server. Available actions include loading a new Interface Model, processing the current 3D Scene Model, creation or removal of the 2D interface, changing of mapping, setting a variable, activating a viewpoint, etc.

Two actions important for the implementation of the AVE method are: (1) loading a new Interface Model selected according to rules included in the Metamodel (the changeModel action) and (2) processing the 3D Scene Model, i.e., creation of the 3D scene (the processCurrentModel3D action). These actions are described below.

The changeModel action, like any action, may be launched from the 2D or the 3D interface (see Figure 8). The Interaction Manager requests the Server Communication Manager to send a request to the Periscope Server. The Periscope Server retrieves a Metamodel and the X-VRML Servlet interpreter processes it in order to determine the Interface Model to be loaded. The name of the Interface Model to be loaded is returned as the value of the next_model variable in the response sent to the Periscope Client.

In the next step, the Server Communication Manager requests from the Periscope Server retrieval of the Interface Model specified by the next_model variable. The requested model is retrieved by the X-VRML Servlet and sent without parsing to the Periscope Client. On the client side, the Interface Model is passed to the 2D User Interface Manager, which dynamically creates a 2D interface according to the specification found in the model. Creation of the 2D interface finishes the changeModel action. Note that this action performs only Metamodel processing and 2D interface creation; the 3D Scene Model is not processed and the 3D scene is not created.

Once a model is loaded, the processCurrentModel3D action may be originated by the 2D or the 3D interface. This action causes the Server Communication Manager to request that the currently loaded 3D Scene Model be processed on the server (see Figure 9). On the server side, the X-VRML Servlet retrieves the 3D Scene Model and processes it using data passed from the Periscope Client. These data include variables from the 2D User Interface, variables stored during previous model processing, and variables set as parameters of the action call. Processing of the 3D Scene Model involves processing of a user query, retrieval of the search result, calculations, etc., as programmed in the X-VRML code. Processing of the 3D Scene Model on the server is limited to the static (non-interactive) elements of the model. The pre-processed 3D Scene Model and the variables set on the server side are sent back to the Periscope Client, where the Server Communication Manager passes them to the client-side X-VRML Interpreter. The X-VRML Interpreter performs client-side processing of the dynamic (interactive) elements of the 3D Scene Model and passes the resulting 3D scene to the 3D Virtual Scene Interface.
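The changeModel flow can be illustrated with a rough sketch. The class and method names below are stand-ins, not the system's API; only the action semantics and the client/server division of work (Metamodel processed on the server, Interface Model parsed on the client, no 3D scene built yet) follow the text.

```python
# Sketch of the changeModel action: the server processes the Metamodel
# and names the next Interface Model; the client fetches that model
# unparsed and builds only the 2D interface from it.

class PeriscopeServer:
    def __init__(self, metamodel_rules, interface_models):
        self.rules = metamodel_rules    # Metamodel logic: state -> model name
        self.models = interface_models  # model name -> Interface Model text

    def process_metamodel(self, state):
        # Server-side Metamodel processing; returns the next_model variable.
        return {"next_model": self.rules(state)}

    def get_interface_model(self, name):
        # Returned without parsing; Interface Models are processed client-side.
        return self.models[name]

class PeriscopeClient:
    def __init__(self, server):
        self.server = server
        self.current_model = None
        self.ui_2d = None

    def change_model(self, state):
        # 1) ask the server to process the Metamodel
        next_model = self.server.process_metamodel(state)["next_model"]
        # 2) fetch the selected Interface Model (unparsed)
        self.current_model = self.server.get_interface_model(next_model)
        # 3) build the 2D interface; the 3D scene is NOT created here
        self.ui_2d = "2D interface for " + next_model
        return next_model
```

A Metamodel rule might, for example, pick a holistic model for large result sets and an analytical one for small sets; the sketch leaves that logic to the injected `metamodel_rules` callable.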
Figure 8. Interactions between Periscope components during the changeModel action
Figure 9. Interactions between Periscope components during the processCurrentModel3D action
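The two-stage processing of the 3D Scene Model (the server pre-processes static, non-interactive elements; the client finishes the dynamic, interactive ones) can be sketched as follows. This is an illustration only: a "model" is represented as a list of tagged text templates, whereas the real system processes X-VRML, which this sketch does not attempt.

```python
# Two-stage 3D Scene Model processing: static elements are resolved on
# the server, dynamic elements are deferred and resolved on the client.

def server_preprocess(model, variables):
    """Process static (non-interactive) elements; leave dynamic ones."""
    out = []
    for kind, template in model:
        if kind == "static":
            out.append(("done", template.format(**variables)))
        else:
            out.append((kind, template))  # deferred to the client
    return out

def client_process(premodel, variables):
    """Process remaining dynamic (interactive) elements into a 3D scene."""
    scene = []
    for kind, template in premodel:
        scene.append(template if kind == "done"
                     else template.format(**variables))
    return scene

model = [("static", "Sphere radius {r}"),
         ("dynamic", "TouchSensor -> {action}")]
pre = server_preprocess(model, {"r": 2})
scene = client_process(pre, {"action": "processCurrentModel3D"})
# scene == ["Sphere radius 2", "TouchSensor -> processCurrentModel3D"]
```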
4.7. Searching with the Use of Periscope

In Figure 10, the activity diagram of the Periscope system is presented. A user specifies a query, which is sent to a search engine. The visualization system retrieves the search result and, with or without user interaction, an appropriate visualization interface is selected. Within the selected interface, a user chooses the method of visualization of the search result attributes by assigning them to particular visual properties of the interface.

Figure 10. Periscope activity diagram

With these data, the visualization system creates an interface, which consists of a 3D scene and a 2D panel. The user may select another visualization interface, specify a new query within the retrieved results, or refine the query, i.e., create a new set of constraints to retrieve information from the search engine. A user may also proceed to a particular document of interest.

During interface instantiation, a default mapping between the attributes of the search result and the visualization dimensions of the interface is applied. During interface exploration, a user may change the mapping by the use of a special window (see Figure 11). The selected mappings are passed to the system and the visualization with the appropriate attribute-to-visualization dimension mapping is performed.

Figure 11. Attribute-to-visualization dimension mapping window

5. Interface Examples

A number of Interface Models have been implemented for the Periscope system [Wiza et al. 2003.06]. Examples of holistic, analytical, hybrid, and specialized interfaces are presented below. In Figure 12, a holistic interface with two visualization dimensions is presented. In this interface, each sphere represents a set of documents in a particular language, while sphere slices represent the distribution of documents within the DNS domains. The mapping of attributes to visualization dimensions is flexible, and the same interface may be used differently: subsequent spheres may represent the number of keywords found in the document (1, 2, 3…) while colored slices may represent the number of documents in a particular format (e.g., pdf, txt, html, doc).

Figure 12. Example holistic interface (see also color plate)

In Figure 13, an example analytical interface is presented. This interface uses the metaphor of a multilevel store to enable visualization with a high number of different dimensions. The following visualization dimensions are used in this interface: object position on the x, y, and z-axes, object color, size, shape, and blinking (temporary color change). Therefore, with this interface, seven different attributes may be visualized at the same time, permitting a user a detailed analysis of the result. In the presented example, glyph colors represent the document language, position on the x-axis represents the modification date, the y-axis represents the host address, the z-axis represents the document size, and shape represents the file type. If a URL has been re-indexed within the last 10 days (a user-controllable parameter), the corresponding glyph blinks.
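A mapping of this kind (e.g., document language to glyph color, file type to glyph shape) is essentially a lookup table from result attributes to visual properties. The sketch below is illustrative only; the attribute names, dimension names, and value tables are assumed, not taken from the system.

```python
# Illustrative attribute-to-visualization-dimension mapping.
# A mapping assigns each search-result attribute to one visual property
# of the interface; changing the mapping changes only the 3D scene.

LANG_COLORS = {"en": "blue", "pl": "red", "de": "green"}
TYPE_SHAPES = {"html": "cube", "pdf": "sphere", "txt": "cone"}

def apply_mapping(record, mapping):
    """Turn one search-result record into glyph properties."""
    glyph = {}
    for attribute, dimension in mapping.items():
        value = record[attribute]
        if dimension == "color":
            glyph["color"] = LANG_COLORS.get(value, "gray")
        elif dimension == "shape":
            glyph["shape"] = TYPE_SHAPES.get(value, "cube")
        elif dimension.endswith("-axis"):
            glyph[dimension] = value  # positional dimensions take raw values
    return glyph

record = {"language": "pl", "filetype": "pdf", "size": 42}
mapping = {"language": "color", "filetype": "shape", "size": "x-axis"}
# apply_mapping(record, mapping)
# -> {'color': 'red', 'shape': 'sphere', 'x-axis': 42}
```

Re-mapping is then just passing a different `mapping` dictionary for the same records, which matches the interactive re-mapping window described above.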
Figure 13. Example analytical interface (see also color plate)

While the above-presented interfaces can be clearly categorized as holistic or analytical, the interface presented in Figure 14 can be used as either. This interface has three visualization dimensions: position on the x-axis, glyph color, and glyph size. Each glyph may represent a group of documents (holistic view) or a single document (analytical view). To allow a user to distinguish between these two views, in an analytical interface each glyph is tied to a visual tag that is an anchor to the document. This interface provides a user with enhanced interaction possibilities: each glyph may be shifted along the axis, and therefore a user may group objects of interest to work with.

Figure 14. Example universal – holistic or analytical – interface (see also color plate)

In Figure 15, an example hybrid interface is presented. This interface joins a holistic, aggregated view of documents grouped by domains (each coaxial cylinder represents a different domain) with an analytical, detailed view of all documents (each tile on a cylinder represents one document). An additional visualization dimension – object color – is used to represent the website of the document: tiles with the same color represent documents found on the same website. The interface presented in Figure 15 also permits a comparative visualization of search results for two different queries (each set of coaxial cylinders represents one query).

Figure 15. Example hybrid interface (see also color plate)

The Periscope system may also be equipped with specialized interfaces, which enrich the exploration possibilities within the search result. In Figure 16, an example interface used to visualize documents found on a single website is presented. Properties of each document may be expressed by the use of the following visualization dimensions: glyph color, size, shape, and the color and length of the join.

Figure 16. An example of a specialized interface – a host view (see also color plate)

Another example of a specialized interface is shown in Figure 17. This interface permits a user to browse images found in selected documents or on a single host. Images may be moved and enlarged for detailed examination. A user may also go to the original image by the use of the associated anchor.
Figure 17. An example of a specialized interface – image browser

6. Conclusions

The Periscope system provides a novel user interface for web search engines. The system provides a user with a global view of the entire classified search result instead of just a few documents that are claimed to be the most relevant. A user may visualize the same search result in many different ways to better understand the nature of the obtained information. Then – in an interactive multi-step process – a user may refine his/her query and apply an appropriate visualization method, eventually finding the required documents.

The Periscope system is built on top of an indexing search engine. As opposed to directories, indexing search engines usually store a higher number of URLs; thus the search capabilities of Periscope are high. In the Periscope system, the search result can be presented at different levels of abstraction using different levels of detail. For instance, in the first step the search results may be presented grouped by language domains, then, in the second step, grouped by sites, and finally, in the most detailed view, a user may see single URLs.

Important features of the Periscope system, which support a user in the process of search result analysis, include:
- model switching – a user can switch from one model to another to visualize different aspects of the search result;
- attribute mapping – a user can freely assign search result attributes to visualization dimensions and, therefore, modify the method of visualization to highlight important features of the search result;
- comparative searches – using special models, a user can compare in a single 3D scene the search results of two or more different queries.

First trials of the Periscope system, connected to a custom search engine database containing information about approximately 70% of sites within the Polish Internet domain (.pl), proved that the AVE method can be efficiently used for Web searching. Although the system response time is usually higher than in the case of popular search engines (like AltaVista or Google), reaching up to 15 seconds for some complex queries, the informational completeness of the results and the understandable form of presentation proved to be worth the short delay. Future work is focused on improvements in the interface selection method and the development of new interfaces. It is expected that the interface selection algorithm may be improved by registering user interactions, such as preferred visualization interfaces and options selected in particular interfaces. The development of new interfaces will be based on the analysis of existing interfaces with respect to their ergonomics and the suitability of the applied visualization metaphors.

References

ANDERS, P. 1999. Envisioning Cyberspace. McGraw-Hill.

BENFORD, S., SNOWDON, D., GREENHALGH, C., INGRAM, R., KNOX, I., AND BROWN, C. 1995. VR-VIBE: A Virtual Environment for Co-operative Information Retrieval. Eurographics'95, 349-360.

CUGINI, J., LASKOWSKI, S., AND SEBRECHTS, M. 2000. Design of 3D Visualization of Search Results: Evolution and Evaluation. Proceedings of IS&T/SPIE's 12th Annual International Symposium: Electronic Imaging 2000: Visual Data Exploration and Analysis.

FEKETE, J. AND PLAISANT, C. 2002. Interactive Information Visualization of a Million Items. Proc. of IEEE Conference on Information Visualization 2002, Boston.

HCIL website. http://www.cs.umd.edu/hcil/

INXIGHT website. http://www.inxight.com/map/

MUKHERJEA, S., AND HARA, Y. 1999. Visualizing World-Wide Web Search Engine Results. International Conference on Information Visualisation, July 14-16, 1999, London, England.

NIRVE project website. http://www.itl.nist.gov/iaui/vvrg/cugini/uicd/nirve-home.html

PERISCOPE website. http://periscope.kti.ae.poznan.pl/

ROSSI, A.M., AND VARGA, M. 1999. Visualization of Massive Retrieved Newsfeeds in Interactive 3D. International Conference on Information Visualisation, July 14-16, 1999, London, England.

SEBRECHTS, M., VASILAKIS, J., MILLER, M., CUGINI, J., AND LASKOWSKI, S. 1999. Visualization of Search Results: A Comparative Evaluation of Text, 2D, and 3D Interfaces. Proceedings of SIGIR'99, 3-10.

STARLIGHT website. http://starlight.pnl.gov/

TUFTE, E. 1990. Envisioning Information. Graphics Press.

TYSON, J. 2000. How ViOS Works? http://www.howstuffworks.com/vios.htm
VR-VIBE website. http://www.emptiness.org/vr/vrvibe.html
WALCZAK, K. AND CELLARY, W. 2002. Building Database Applications of Virtual Reality with X-VRML. Proceedings of the Web3D 2002 Symposium – 7th International Conference on 3D Web Technology, Tempe, Arizona, USA, 111-120.

WALCZAK, K. AND CELLARY, W. 2003. X-VRML for Advanced Virtual Reality Applications. IEEE Computer, 36(3), 89-92.

WALCZAK, K. AND WIZA, W. 2002. Building Dynamic User Interfaces of Virtual Reality Applications with X-VRML. IFIP TC6/WG6.4 Workshop on Internet Technologies, Applications and Societal Impact WITASI 2002, Kluwer Academic Publishers, 45-59.

WIZA, W., WALCZAK, K., AND CELLARY, W. 2003.06. Adaptive 3D Interfaces for Search Result Visualization. IADIS International Conference e-Society 2003, 365-372.

WIZA, W., WALCZAK, K., AND CELLARY, W. 2003.07. AVE – A Method for 3D Visualization of Search Results. 3rd International Conference on Web Engineering ICWE 2003, Springer-Verlag, 204-207.
X-VRML website. http://xvrml.kti.ae.poznan.pl/
40
Periscope – A System for Adaptive 3D Visualization of Search Results: Wiza, Walczak, Cellary
Figure 12. Example holistic interface
Figure 15. Example hybrid interface
Figure 13. Example analytical interface
Figure 16. An example of a specialized interface – a host view
Figure 14. Example universal – holistic or analytical – interface
180