PROCEEDINGS of the HUMAN FACTORS AND ERGONOMICS SOCIETY 46th ANNUAL MEETING -- 2002

BEST PRACTICES IN SEARCH USER INTERFACE DESIGN

Panel Chairs: Marc L. Resnick and Jennifer Bandos
Florida International University
Miami, FL
(305) 348-3537
[email protected]

The Internet has become a powerful tool for information search and ecommerce. Millions of people use the World Wide Web on a regular basis and the number is increasing rapidly. For many common tasks, users first need to locate a Web site containing the needed information from among the estimated 4 trillion existing web pages. The most common method used to search for information is the search engine. However, even sophisticated users often have difficulty navigating through the complexity of search engine interfaces. Designing more effective and efficient search engines is contingent upon a significant improvement in the search user interface.

PANEL SUMMARY

There are several purposes for using the Web, including publishing information in an electronic format, retrieving information, and conducting and facilitating commerce. The search engine has become the most common method used to search for information over the Internet, with over half of web users using Internet search engines each week and sixty percent searching for over an hour each week. In one study, only 18% of users reported that they could find what they were looking for on the web, and 67% said they were frustrated when searching. In Sullivan (2000), 21% of the respondents reported being able to find what they were looking for every time, and only 60% reported finding the relevant information most of the time. Aurelio (2000) reported that users need better search interface usability. In that study, users specifically requested help identifying command rules, narrowing searches, and categorizing results. Clearly, work is needed to improve search engine design. The human factors contributions to search engine design can be divided into at least three categories: input design, output design, and support for iterative search.
Input

There are many design considerations related to the structure of the input interface. One of the chief problems is that typical search engine interfaces do not provide any indication of their searching rules, or even that any rules exist (Monaghan and Andre, 2000). Command design is a rich area for research. Jansen and Pooch (2000) reported that few studies found much use of Boolean or proximity operators, stem searching, or fuzzy search. Typical query lengths were generally only one or two keywords. Technical users were more likely to use compound query structures, but levels still remained below 50%. Learning the various command rules of each search engine is not perceived as valuable because users tend to be very task focused when using search engines (Monaghan and Andre, 2000). Stempfhuber (2001) suggests that users should be provided with a preview of the size of the result set along with a dynamic query interface that can be used to adjust the query until the result set is the desired size. Fang and Salvendy (2001) describe a dynamic input

interface where users manipulate sliders and filters to adjust the expected result set size. Sliders can be used for parameters such as date of posting. Filters can be used to modify or add additional keywords.

Output

There is little consistency in the characteristics of the resulting pages that search engines include in the output list. Most include a title and description, but the source is not always clear. Other fields are included by some search engines but not others. Some may support the user's search task, but others may simply clutter the interface, or even impede the search process. The objective of the search site should be to include the characteristics that support effective searching and to eliminate those that impede or confuse users.

Iterative search support

Many studies (e.g., Proper and van der Weide, 2001) have found that search engine users typically have only vague conceptions of their information need when they begin searching. Combined with the relatively low probability of finding the best match on the first try, this means that support for iterative search can significantly enhance the successful use of search engines (Light, 1997). This can be accomplished in several ways. Simply presenting the previous query on the results page would be an improvement to any search interface that fails to do so. In this way, users can modify the previous search without navigating back to the input page. Proper and van der Weide (2001) suggest that users can be asked follow-up questions about their query when there are multiple interpretations or the result set is either too large or too small. Users can also be asked to rate results according to the quality of the match, and a new search can be generated accordingly. Lundquist, Grossman, and Friedler (1997) report on a relevance feedback mechanism that can be used to improve and reduce the result set.
Relevance can be quantified by allowing users to weight keywords or by adding new terms based on an evaluation of current results. Users can also be asked to rate a few resulting pages based on their appropriateness, and a new search can be generated from an analysis of these pages. Keywords from the highly rated pages can be added and keywords from the negatively rated pages can be filtered out. McEvoy (2001) suggests a more empirical approach. He reports on a method to mine previous searches for hints as to the intentions of users. This data can then be used to present hints to future users who fail to retrieve any acceptable results during their search.

The Panel

The panel is composed of experts from academia and industry who have been involved in evaluating and designing search engine interfaces in a variety of domains. The academic experts will present best practices in the design and evaluation of search interfaces. The industry experts will present case studies illustrating how search interfaces were developed at their organizations.
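The relevance-feedback loop described under Iterative search support can be sketched roughly as follows. The function and field names are hypothetical, and simple term counting stands in for the vector-space weighting used in the cited work:

```python
from collections import Counter

def refine_query(query_terms, rated_pages, top_n=3):
    """Expand a query using user ratings of result pages.

    rated_pages: list of (page_text, liked) tuples, where liked is a bool.
    Terms frequent in liked pages are added to the query; terms that are
    at least as frequent in disliked pages are filtered out.
    """
    liked = Counter()
    disliked = Counter()
    for text, is_liked in rated_pages:
        words = [w.lower() for w in text.split() if w.isalpha()]
        (liked if is_liked else disliked).update(words)

    # Candidate expansion terms: common in liked pages, rarer in
    # disliked pages, and not already part of the query.
    candidates = [
        (count, term) for term, count in liked.items()
        if term not in query_terms and count > disliked.get(term, 0)
    ]
    new_terms = [t for _, t in sorted(candidates, reverse=True)[:top_n]]
    return list(query_terms) + new_terms

pages = [
    ("search interface usability design guidelines", True),
    ("usability testing of search interface prototypes", True),
    ("discount airline tickets search", False),
]
print(refine_query(["search"], pages))
```

A production system would weight terms rather than count them, but the control flow (rate a few pages, re-derive the query, search again) is the same.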

DESIGNING THE OUTPUT INTERFACES FOR INTERNET SEARCH - BEST PRACTICES

Rebeca Lergier
Consultant
Usability Solutions
Miami, FL

Designing the output for search engine interfaces is a challenging task. There are several key issues to consider. Many commercial Internet-wide search engines overload users with thousands of results, yet the user is unlikely to examine more than a few results pages. A better strategy would be to customize the interface so users can find the information they are seeking in a more non-linear and efficient fashion. For example, users can customize the design of the output to include only a specific set of fields such as size and date (Lergier and Resnick, 2001) or sort the results using these fields. This allows users to search the result list faster and concentrate their search on the descriptors that are important to the specific task at hand. The output design can be graphically organized to highlight key fields of information rather than presenting text-only lists of page descriptions. Graphical design can support users in scanning results lists using fields that they may not have included in the original query. Resnick, Maldonado, Santos, and Lergier (2000) present an output design based on a tabular alternative to the typical list format used by most search engines. Though manipulating only layout rather than content, this tabular design not only increased the speed at which users could find an acceptable result page, but fundamentally altered the strategy that users adopted to search through the entire set. Woodruff, Faulring, Rosenholtz, Morrison, and Pirolli (2001) suggested using thumbnails to represent results instead of summary lists. In this way, users would get a better conception of the type of page that would be retrieved, leading to more accurate selection of results pages. Textual annotation of these thumbnails can lead to additional improvement.
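As a minimal sketch of this field-based output idea (the result records and field names below are illustrative, not taken from the cited studies), structured descriptors let users restrict the display to the fields they care about and sort on any of them:

```python
# Each result carries structured descriptors, not just a text blurb.
results = [
    {"title": "Search UI guidelines", "size_kb": 120, "date": "2002-01-15"},
    {"title": "Query formation study", "size_kb": 45, "date": "2001-06-30"},
    {"title": "Thumbnail summaries", "size_kb": 310, "date": "2001-11-02"},
]

def render_table(results, fields, sort_by):
    """Show only the fields the user selected, sorted on one of them."""
    rows = sorted(results, key=lambda r: r[sort_by], reverse=True)
    return [tuple(r[f] for f in fields) for r in rows]

for row in render_table(results, fields=("date", "title"), sort_by="date"):
    print(row)
```

Because the descriptors are structured, the same records can be re-sorted or re-filtered client-side without re-running the search.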
Another strategy to help users find information faster is to organize the results into folders according to keywords that they did not include in their search query (Fang and Salvendy, 2001). Unfortunately, users generally include only one or two keywords and often misuse the Boolean operators they use (Bandos and Resnick, 2002). Using additional keywords to organize the results could compensate for short queries. Silverman (2001) recommends organizing the results into folders when there are too many to present on one page. Folders can be selected according to multiple interpretations of the search query, types of pages identified, date or size variables, and/or other characteristics. In this way, users can select the interpretation that best matches their information need. Chen and Dumais (2000) tested such a design and found it to achieve superior objective and subjective performance. Some existing search engines, such as Vivisimo™, currently employ this technique. Gauch and Wang (1997) present a more information-intensive enhancement that involves a corpus analysis evaluating the similarities of the untagged text in the resulting pages. A database can be generated based on full-text occurrence patterns. Searches can be automatically expanded to retrieve additional results based on conceptual similarity rather than keyword count. This presentation will include descriptions of several current and emerging methods for improving search effectiveness using output design. Each method will be evaluated for its ease of use by the searching public, complexity of implementation, and potential benefit to search.

DESIGNING SEARCH UIS FOR A DIVERSITY OF USERS: THE CASE OF ORACLE'S SEARCH UI GUIDELINE

Misha W. Vaughan
Senior Usability Engineer
Oracle Corporation
Redwood Shores, CA

Betsy Beier
Principal User Interface Designer
Oracle Corporation
Redwood Shores, CA

As part of the effort to consolidate a common look-and-feel across Oracle's applications suite, the Usability and Interface Design group created a robust set of user interface guidelines. One guideline was devoted exclusively to the problem of searching. Although browsing, searching, and data exploration are intimately intertwined, we will restrict our discussion here to the issues of searching. One way to examine the problem of searching is in terms of user types, i.e., the skill set, prior experience, and domain knowledge of one's users. Our user types range from database administrators, to manufacturing managers, to salespersons, to company employees. Intersected with this diversity is frequency of use, i.e., users who make occasional use of an application versus users who make continual use of an application. As a starting point we chose to divide the problem into two general user types: 'self-service' and 'professional' users. Each of these user types is best thought of through some central principles expressed along a continuum. Self-service users are generally defined as occasional users of an application who possess basic web experience, i.e., web browsing, reading online, and perhaps having made an online purchase. These users can range


from persons with little to no computing expertise and low domain knowledge (e.g., a shop floor worker filling out his/her benefits online), to someone with a modicum of computing expertise and a modicum of domain knowledge (e.g., a human resources professional filling out his/her benefits online). Professional users are defined as domain experts who routinely use a given application and also have a medium to high level of computing experience. Professional users can range from persons with a small set of often-repeated tasks and moderate computer expertise (e.g., a telephone sales representative taking orders via computer), to persons with a high degree of task variety and computing expertise (e.g., financial analysts, system administrators). Clearly there is a wide range of user profiles in between; this is simply where we chose to begin our design investigation. For self-service users we argue for use of a basic search user interface (UI) (see Figure 1), and if present, only a very constrained advanced search UI. A simple search UI is composed of a search text box, a 'go' button, and one page of a results table. Occasionally one might add the ability to search on specific attributes or categories of interest via a pop-down list. The output is displayed in a tabular format listing the possible matches. For this user type, the results table would have little to no related actions (e.g., duplicate, edit) or filtering available. Part of the search engine logic has been specified to aid this user type, including support for case insensitivity, misspellings, and multiple string searches. It is important to note that these search engine features are common across all search UIs. The design of this search user experience avoids information overload for a self-service user, aids learnability of the search UI, and aids error prevention.
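The matching rules specified for self-service users (case insensitivity, tolerance for misspellings, multiple string searches) might be approximated as follows. The function names and the edit-distance threshold are illustrative assumptions, not Oracle's actual engine logic:

```python
from difflib import SequenceMatcher

def terms_match(query_term, doc_term, threshold=0.8):
    """Case-insensitive comparison with slack for small misspellings."""
    a, b = query_term.lower(), doc_term.lower()
    return a == b or SequenceMatcher(None, a, b).ratio() >= threshold

def matches(query, document):
    """Multiple-string search: every query term must match some doc term."""
    doc_terms = document.lower().split()
    return all(
        any(terms_match(q, d) for d in doc_terms)
        for q in query.split()
    )

print(matches("Benifits enrollment", "Employee benefits enrollment form"))  # True
```

The misspelled "Benifits" still matches "benefits" because the similarity ratio (0.875) clears the threshold, while unrelated terms do not.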

For professional users, who by definition have more experience with an application and more computing expertise, we can expose a more sophisticated search UI. The input is typically a more complex basic and advanced search UI (see Figure 2). The basic search provides multiple attributes of interest as well as a keyword text box. The advanced search provides additional attributes and can expose conditional logic (e.g., 'contains,' 'less than,' 'greater than'). The search result set can also be more sophisticated, allowing search within the existing search results, providing a greater number of possible actions, sorting by columns, and viewing updateable (or editable) table results. As part of the search engine logic, again common to all search UIs, we have specified certain rules to support these users, including handling of quotes, wildcards, and Boolean operators. Advanced operators may also be exposed in the UI in the form of tips or hints, such as use of the '%' wildcard. For professional users on the high end, that is, those who have a high degree of exposure to an application as well as computing expertise, we have created an additional feature called 'customizable views' (see Figure 3). This is a UI mechanism designed to allow a user to create and save a set of query terms (i.e., saved searches), as well as to customize the display of the results (i.e., sort order, columns displayed, column order, and number of results displayed on a page). These two general classes of design were arrived at based on input from product teams about their user requirements, as well as usability testing of prototypes with actual end users. We believe this is a solid set of initial designs and have released them to all of our product teams.
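A rough sketch of the professional-search rules above (quoted phrases and the '%' wildcard) together with a saved-search record for customizable views; all names here are illustrative rather than Oracle's actual specification:

```python
import re
from dataclasses import dataclass

def term_to_regex(term):
    """Quoted phrases match literally; '%' is a multi-character wildcard."""
    if term.startswith('"') and term.endswith('"'):
        return re.escape(term.strip('"'))
    return ".*".join(re.escape(part) for part in term.split("%"))

def term_matches(term, text):
    """Case-insensitive match of one query term against a field value."""
    return re.search(term_to_regex(term), text, re.IGNORECASE) is not None

@dataclass
class SavedSearch:
    """A 'customizable view': saved query terms plus display preferences."""
    terms: list
    columns: tuple = ("title", "date")
    sort_by: str = "date"
    page_size: int = 25

print(term_matches("ord%", "Purchase Orders"))              # '%' wildcard
print(term_matches('"purchase order"', "Purchase Orders"))  # quoted phrase
```

Persisting `SavedSearch`-style records is what lets a high-end user recall both the query and the exact result layout in one step.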



Figure 3. Customizable Views for Advanced Users

BUILDING A SITE-WIDE SEARCH UTILITY: LESSONS LEARNED

Joel Angiolillo
Distinguished Member of the Technical Staff
Verizon Laboratories
Waltham, MA
(781) 466-2674

[email protected]

In 2001, Verizon launched a completely new web site. Every component of the site was new, from the home page down to the lowliest FAQ. In the process, every utility on the site was redesigned, including site-search (site-search provides the ability to search the contents of the site one is on). The new search utility was based on a Verity™ search engine, but the user interface was completely home grown. This paper is a travel log of our 18-month trip through the design and test of the verizon.com search utility. Verizon.com has over 100K pages of information. Any single user may be looking for only one of those many pages. In the worst case, users don't know what they are looking for and might not recognize useful content if they tripped over it. To make looking for information more successful, what sorts of Input, Output, and System features should a search utility have?

Input Questions: Should there be a search utility on the site at all? Why? If there is one, where should it be located? On the Home Page? On every page? Should it be a search box or a link to a page that allows for more controlled

searching? Should Boolean features be offered? What types of keywords are the users likely to type?

Output Questions: How many matches should be displayed? What information should be displayed with each match? How should the results be sorted? Should synonym matching be used? What should the system do if too many or too few results are returned?

System Questions: How should pages be coded and the site organized to support searchers? What type of process should be in place to assure continuous improvement? How much money should we have for ongoing testing and development?

To answer these and other questions, we employed many of the standard tools in the human factors toolbox. We researched the literature (including studies of the sort presented in this panel), conducted focus groups, built prototypes, ran usability tests, studied log analyses, and wrote requirements. However, in the end, it came down to the weekly Search Team meetings and our working relationship with fellow team members. The art of good design bunks with its cousins, the art of negotiation and compromise. This panel is an appropriate forum to review the lessons learned from this project. If we had to do it all over again, what would we do differently? What are the questions we need the researcher to answer, what are the solutions we need the technologist to build, and, in turn, what can the researcher and technologist learn from the travels of the practitioner?


NEW USER-CENTERED AND COGNITIVE-BASED APPROACHES TO SEARCH ENGINE EVALUATION

Amanda Spink
Associate Professor
The Pennsylvania State University
University Park, PA
[email protected]

This presentation will discuss emerging user-centered and cognitive task-based approaches to the measurement of search engine effectiveness. Search engines are to be studied from two broad perspectives. Systems-centered evaluation approaches include evaluation measures to determine effectiveness, efficiency, and cost effectiveness, including variations of precision and recall measures. User-centered evaluation approaches include usability studies and emerging task-centered approaches that seek to gain a better understanding of (1) user/search engine interaction processes, and (2) how users evaluate their own interactions with search engines; most approaches to search engine evaluation are for researchers, not users. User-centered, cognitive-based approaches are exploring the criteria users employ to evaluate their own search engine tasks and interactions. Specific task-based studies focus on the user's information task resolution during the user's information seeking processes. Studies have proposed new, more user-centered search engine evaluation measures, including Reid's (2000) task-based approach, Tague and Schultz's (1989) informativeness measure, and Greisdorf and Spink's (2001) median measure. This presentation discusses Spink's (2002) information problem shift measure, which assesses the impact of search engine interactions on a user's: (1) information problem/task resolution, (2) progress to a more advanced information seeking stage, and (3) increase in personal knowledge of the information task/problem. Research shows that different search engine users experience different levels of shift/change in their information problem/task, information seeking stage, and personal knowledge level of the information problem/task.
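For reference, the system-centered precision and recall measures mentioned above can be computed directly from a result list and a known set of relevant documents (a minimal sketch with made-up document IDs):

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved documents that are relevant.
    Recall: fraction of relevant documents that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

p, r = precision_recall(retrieved=["d1", "d2", "d3", "d4"],
                        relevant=["d2", "d4", "d7"])
print(p, r)  # 0.5 and 2/3: two of four retrieved are relevant;
             # two of three relevant documents were retrieved
```

Note that both measures require a ground-truth relevance set, which is exactly why they serve researchers better than end users.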
Precision does not always correlate with other cognitive shifts. Implications for the development of user-centered cognitive approaches to search engine evaluation and further research are discussed.

REFERENCES

Aurelio, D.N. (1999). Exploratory usability test of four Web search engines. Unpublished research report. Northeastern University: Boston, MA.
Bandos, J. and Resnick, M.L. (2002). Understanding query formation in the use of Internet search engines. Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting. Human Factors and Ergonomics Society: Santa Monica, CA.
Chen, H. and Dumais, S. (2000). Bringing order to the web: automatically categorizing search results. Conference on Human Factors in Computing Systems - Proceedings 2000. Association of Computing Machinery: New York, NY.
Fang, X. and Salvendy, G. (2001). Keyword comparison: a user centered feature for improving web search tools. International Journal of Human-Computer Studies, 52, 915-931.
Gauch, S. and Wang, J. (1997). A corpus analysis approach for automatic query extension. Proceedings of the 6th International Conference on Information and Knowledge Management. Association of Computing Machinery: New York, NY.
Greisdorf, H. and Spink, A. (2001). Median measure: An approach to IR systems evaluation. Information Processing and Management, 37(6), 843-857.
Jansen, B.J. and Pooch, U. (2000). Web user studies: A review and framework for future work. Journal of the American Society of Information Science and Technology, 52(3), 235-246.
Lergier, R. and Resnick, M.L. (2001). Task based analysis of Internet search output fields. Usability Evaluation and Interface Design, Volume I. M.J. Smith, G. Salvendy, D. Harris, and R.J. Koubek (eds). Lawrence Erlbaum Associates: Mahwah, NJ.
Light, J. (1997). A distributed, graphical, topic-oriented document search system. Proceedings of the 6th International Conference on Information and Knowledge Management. Association of Computing Machinery: New York, NY.
Lundquist, C., Grossman, D.A., and Friedler, O. (1997). Improving relevance feedback in the vector space model. Proceedings of the 6th International Conference on Information and Knowledge Management. Association of Computing Machinery: New York, NY.
McEvoy, C. (2001). Letters from readers. UITips, 12/4/01, 3-4.
Monaghan, M.L. and Andre, A.D. (2000). Evaluating the transparency of web search engines. Proceedings of the 2000 HFES/IEA Congress. Human Factors and Ergonomics Society: Santa Monica, CA.
Proper, E. and van der Weide, T. (2001). Information coverage: incrementally satisfying a searcher's information need. Proceedings of the Universal Access in HCI Conference. Lawrence Erlbaum Associates: Mahwah, NJ.
Reid, J. (2000). A task oriented non-interactive evaluation methodology for information retrieval systems. Information Retrieval, 2, 115-129.
Resnick, M.L., Maldonado, C.A., Santos, J.M., and Lergier, R. (2001). Modeling on-line search behavior using alternative output structures. Proceedings of the Human Factors and Ergonomics Society 45th Annual Conference. Human Factors and Ergonomics Society: Santa Monica, CA.
Silverman, B.G. (2001). Implications of buyer decision theory for design of e-commerce websites. International Journal of Human-Computer Studies, 55, 815-844.
Spink, A. (2002). A user centered approach to the evaluation of Web search engines: An exploratory study. Information Processing and Management, 38(3), 401-426.
Stempfhuber, M. (2001). Adaptable and intelligent user interfaces to heterogeneous information. Proceedings of the Universal Access in HCI Conference. Lawrence Erlbaum Associates: Mahwah, NJ.
Sullivan, D. (2000). NPD Search and Portal Site Study. Retrieved 08/13/00 at http://www.searchenginewatch.com/reports/npd.html.
Tague, J. and Schultz, R. (1989). Evaluation of the user interface in an information retrieval system: A model. Information Processing & Management, 25(4), 377-389.
Woodruff, A., Faulring, A., Rosenholtz, R., Morrison, J. and Pirolli, P. (2001). Using thumbnails to search the web. Proceedings of SIGCHI '01. Association of Computing Machinery: New York, NY.
