A Specialized Search Assistant for Learning Objects

CECILIA CURLANGO-ROSAS, Universidad Autónoma de Baja California
GREGORIO A. PONCE, San Diego State University
GABRIEL A. LOPEZ-MORTEO, Universidad Autónoma de Baja California

The Web holds a great quantity of material that can be used to enhance classroom instruction. However, it is not easy to retrieve this material with the search engines currently available. This study produced a specialized search assistant based on Google that significantly increases the number of instances in which teachers find the desired learning objects as compared to using this popular public search engine directly. Success in finding learning objects by study participants went from 80% using Google alone to 96% when using our search assistant in one scenario and, in another scenario, from a 40% success rate with Google alone to 66% with our assistant. This specialized search assistant implements features such as bilingual search and term suggestion, which were requested by teacher participants to help improve their searches. Study participants evaluated the specialized search assistant and found it significantly easier to use and more useful than the popular search engine for the purpose of finding learning objects.

Categories and Subject Descriptors: H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval

General Terms: Human Factors, Experimentation

Additional Key Words and Phrases: Bilingual search, search assistant, Web search interfaces, WWW

ACM Reference Format:
Curlango-Rosas, C., Ponce, G. A., and Lopez-Morteo, G. A. 2011. A specialized search assistant for learning objects. ACM Trans. Web 5, 4, Article 21 (October 2011), 29 pages.
DOI = 10.1145/2019643.2019648 http://doi.acm.org/10.1145/2019643.2019648

1. INTRODUCTION

The Web houses a veritable ocean of content, some of which can be used for instructional purposes; such a piece of content is typically referred to as a learning object (LO). Teachers rely on search engines to help them look for LOs online. However, finding information on the Web is difficult for many users. First, users have difficulty choosing the correct words for stating their information needs in the form of queries [Belkin 2000]. They write short queries, which have, on average, 2.21 terms each. They also make few queries, on average 1.6 per search session. They examine few search results, on average 2.35 pages containing 10 results each, with 58% of users accessing only results on the first page

This work was supported by the Universidad Autónoma de Baja California, particularly its School of Engineering at the Mexicali Campus, which made this work possible. The study reported in this article comes from work in process by C. Curlango-Rosas in partial fulfillment of her doctoral dissertation program at the Universidad Autónoma de Baja California.
Authors' addresses: C. Curlango-Rosas and G. A. Lopez-Morteo, College of Engineering, Universidad Autónoma de Baja California, Blvd. Benito Juarez s/n, Mexicali, Mexico 21280; email: [email protected]; G. A. Ponce, San Diego State University, Imperial Valley Campus, 720 Heber Avenue, Calexico, CA 92231.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works requires prior specific permission and/or a fee. Permissions may be requested from the Publications Dept., ACM, Inc., 2 Penn Plaza, Suite 701, New York, NY 10121-0701 USA, fax +1 (212) 869-0481, or [email protected].
© 2011 ACM 1559-1131/2011/10-ART21 $10.00
DOI 10.1145/2019643.2019648 http://doi.acm.org/10.1145/2019643.2019648


[Jansen et al. 2000]. In addition, users have trouble evaluating search results and deciding what sources and information to select [Walraven et al. 2009].

It has been observed that there is some difficulty in finding LOs [Downes 2004; Seyedarabi 2006]. Part of the problem could stem from the fact that there is disagreement as to what exactly constitutes an LO. There are multiple definitions for LO, some so broad they consider an LO to be anything at all. For example, Friesen [2001] and Mortimer [2002] consider that LOs can be anything and should not be limited to the digital world; this implies that LOs include physical objects, like books and models. Others, like Wiley [1999], consider LOs to be any digital entity, even those with no educational purpose. On the other hand, Doorten et al. [2004] and Quinn and Hobbs [2000] consider that LOs must have an educational purpose. Still others, like Dunning [2002], Koper [2003], Sosteric and Hesemeier [2004], and Polsani [2004], consider LOs to be those digital objects that have a more formal educational purpose. Yet another distinction holds that LOs are only those digital objects that have been marked in a specific manner for educational purposes [Cisco-Systems 2001; Koper 2001; Koper and van Es 2004; Learning Alberta 2002; Rehak and Mason 2003; Sloep 2004; Wieseler 1999]. Unfortunately, this diversity of definitions has given rise to many terms which are used when referring to LOs, such as learning resource, component, content object, media object, and reusable learning object, among others [McGreal 2004]. For instance, terms and descriptions used when searching for LOs include knowledge objects, educational objects, knowledge chunks, digital objects, digital educational computer programs, and Flash exercises [Nash 2005]. Therefore, a user cannot just type the terms learning object along with the topic and expect to obtain relevant results.

In spite of the myriad of definitions and discussions regarding what is and is not an LO, the majority of discussions on LOs seem to accept the definition proposed by the Learning Technology Standards Committee (LTSC) in its proposed standard [IEEE 2002], which defines an LO as "any entity, digital or nondigital, that may be used for learning, education or training". A recent contribution is found in Churchill [2007], who proposes not only a definition for LO, "a learning object is a representation designed to afford uses in different educational contexts", but also a classification of LOs into six categories: presentation, practice, simulation, conceptual models, information, and contextual representation objects.

Another reason LOs can be difficult to find is that they are stored in many places. Some LOs are stored in repositories, some of which contain both LOs and metadata while others contain only metadata. Some repositories provide a Web-based interface, a search mechanism, and a listing of the categories that the LOs belong to [Downes 2004]. Others function like a database and are part of other products, such as learning management systems (LMS), which in effect hide the LOs [Morales et al. 2009]. To clarify, repositories can be centralized or distributed. Centralized repositories are more common and store the LO metadata on one server or site while the LOs are stored on other servers distributed on the Internet. Distributed repositories usually use a peer-to-peer architecture that allows servers to communicate with each other.
Repositories allow users to search for LOs stored in their databases and provide support for simple and advanced queries. Advanced queries permit the user to specify criteria based on the metadata fields. In addition to search, some repositories also allow browsing of their collections [Neven and Duval 2002]. The current trend is toward building federations of national and regional repositories. There is also the Global Learning Object Brokered Exchange (Globe), which includes repositories such as ARIADNE (Europe), Merlot (USA), EdNA Online (Australia), LACLO (South America), LORNET (Canada), and NIME (Japan), among others; Neven and Duval [2002] provide a description of these repositories. Members of the different repositories can access LOs in the other repositories [Morales et al. 2009] through federated searches. A


federated search retrieves information that cannot be accessed using conventional search engines like Google or AltaVista [Si and Callan 2005]; an example would be information that is stored in databases. Another problem with LO repositories is that they have heterogeneous interfaces that are not user friendly [Downes 2004]. Finally, some repositories can be difficult to search, while others are not available to the public.

Other educational material that can be considered LOs is scattered throughout the Web on personal Web pages, on unspecialized sites that provide material and assistance useful to teachers, and on sites that, while not having an educational purpose, nonetheless provide content that can be of educational value. Nash [2005] identified wikis, serious games, blogs, and podcasts as Web content being used as LOs. These types of sites, however, lack a formal method for their classification as educational content, as well as one for easy retrieval. In addition, most of these types of materials are not annotated with useful metadata to facilitate their search [Thompson et al. 2003].

LOs that were created for teaching and use in an educational context target a particular audience. For example, a computer science professor creates a simulation of the execution of a search algorithm to present to his class, places it on his university website, and provides his students with its location. Outside the university, this type of LO is going to be difficult to find because outsiders, both professors and students, must rely on public search engines to add these LOs to their indexes in order to make them accessible via a query.

In order to find LOs on the Web, outside of repositories, users rely on public Web search engines. This causes problems because search results can contain LOs mixed in with other content that, though irrelevant to the educational content being sought, may be considered by the search engine to be more relevant and thus positioned higher on the list of search results. For example, Hassan and Mihalcea [2009] found that using a major search engine to retrieve educational material with the query tree data structure yielded only four results that were highly educative within the top 50 documents. By placing material that is irrelevant to the educational topic ahead of the relevant material, the LOs become hidden, or not easily discoverable, within the public domain.

Another source of difficulty in finding LOs stems from the user interfaces provided by the search tools available to users. When the user begins a search using a major search engine (such as Google1 or Yahoo!2), the user is typically presented with a single textbox in which to form the query for the desired LO. There is nothing to guide the user as to how much information (for example, how many terms to use) or what kind (for example, pdf, doc, etc.) must be written in order to adequately describe the LO. It has been noted that users who use advanced search syntax in their queries are consistently more successful in their searches [White and Morris 2007]. However, when attempting to use the advanced search interface, the user can be unclear about how to effectively use the available options. In addition, the advanced search interface is replaced by a listing of the search results once the search is performed. This makes it difficult for the user to remember what options were used to describe the LO.
In order to access his original advanced query, the user must use the browser's back button or select the Advanced search link on the results page. However, doing so means that the search results are no longer visible.

To address the lack of support teachers encounter when using public search engines to locate LOs, we propose the use of a specialized search assistant. One of the issues this search assistant must deal with is how to obtain a better description of the LO

1 http://www.google.com
2 http://www.yahoo.com


being sought, without adding to the user's cognitive load. Also, the search assistant must leverage the description of the LO in order to come up with queries that yield search results precise enough that the required LO is listed among the first 10 results. In this spirit, we developed the specialized search assistant, which we call the Learning Object Search Tool Enhancer (LOBSTER).3 LOBSTER provides assistance to the searcher by presenting a user-friendly interface that guides the user when describing the LO and provides other helpful features such as automatic translation of terms, term suggestion, and clustering of search results. Thus LOBSTER becomes a type of gateway for finding LOs between the searcher and the search engine, which in this case is Google. We will (a) describe other approaches to finding LOs that have been reported in the literature, (b) describe the methodology and rationale we used to develop and test LOBSTER, (c) explain how we designed LOBSTER and the considerations we took into account in its design, (d) report the results we obtained when we had university faculty use and evaluate LOBSTER, (e) discuss our findings, and (f) present our conclusions.

2. RELATED WORK

There have been several efforts geared towards helping users find LOs. The Instructional Architect is a tool that helps teachers find, annotate, and use LOs in digital libraries. Although it works primarily with the National Science Digital Library,4 it also allows teachers to integrate material from the Web into their lessons [Recker et al. 2005]. Farrell et al. [2004] developed a search engine for LOs as part of a learning system for IBM employees. Users of the system specify the topic, the amount of time they have available to learn, and the depth with which they require the topic to be covered. The system searches a collection of 500 LOs that were classified by a group of 27 experts and recommends LOs that meet the searcher's criteria. Instead of developing their own search engine, others have chosen to use ready-made search engines; such is the case of the Norwegian Ministry of Education and Research, which used the Verity K2 search engine for its LO portal [Skår et al. 2003].

Another route that has been explored for finding LOs is personalization. Broisin and Vidal [2006] proposed tracking users as they use an LMS and using this information to provide a personalized search tool that makes recommendations of LOs to the user without them having to fill out a form. Keleberda et al. [2006] also suggested using personalized search technology, in this case to help learners select LOs according to their preferences. Ochoa and Duval [2006] propose searching for LOs using contextualized attention metadata, which is information regarding the identity of a user, the actions he performs, the tools he uses, and the communities he belongs to. These previous works provided no information regarding their acceptance by users. In general, however, Web search personalization has not had a major impact yet due to (a) difficulties in making predictions, (b) problems related to user privacy, and (c) the apparent contradiction between users needing control of systems while at the same time wanting the systems to be unobtrusive [Hearst 2009, p. 232]. Instead of focusing on individuals, another approach to personalizing Web search is the use of groups. Teevan et al. [2009] demonstrated that using a combination of personal and group content improved personalization, especially for groups that had a common task, occupation, or interest.

For finding LOs on the Web there have been few works. Google provides a service to locate LOs for computer science called Google Code University.5 According to the

3 http://lobster.mxl.uabc.mx
4 http://www.nsdl.org
5 http://code.google.com/edu


site, one can find class presentations, readings, exercises, and projects. When searches are performed in the section entitled CS Curriculum Search, results are shown in the following categories: Lectures, Assignments, Papers, and Videos. Seyedarabi [2006] describes the iClass project, which contains the PoSTech (Personalized Search Tool for Teachers) search tool that allows teachers to search for LOs by selecting several criteria from drop-down boxes. Finally, Thompson et al. [2003] propose developing tools that require little or no human intervention to insert metadata into LOs that are published on the Web.

Also relevant to the work we present are other efforts geared towards assisting users with Web search, such as query expansion and query suggestion techniques. Interactive query expansion is a way to support users when formulating a query. A problem with query expansion is that even though users indicate they are interested in having terms suggested during their searches, it has been found that they do not use this feature when it is provided. When they do try to use it, they have problems making good decisions regarding the terms and are at times reluctant to select terms when they do not understand why they were suggested or where they came from [Kelly et al. 2005]. Currently, many Web search engines offer term suggestions for queries. After a search, Google, Yahoo!, and Bing users receive a list of search results and related queries; however, no additional information is shown regarding the suggestions [Kelly et al. 2010]. Assistance with query formulation is important for supporting users when they are searching for topics they are not very familiar with [Kelly et al. 2009]. In addition, as indicated in Hearst [2009, p. 149], term suggestion may also be favorable for specialized and technical situations and users.

Other work has focused on how search results are presented to users. Dumais et al. [2001] evaluated several user interfaces that displayed search results grouped in categories versus the traditional list view of results and found that when using category views participants took less time to complete the search tasks. In addition, participants preferred the category views over the list views. In another study, Käki [2005b] determined that an appropriate number of categories to show users is between 10 and 20. Käki [2005a] found that using categories reduced the number of cases in which users found zero relevant results.

Missing from the literature we reviewed were efforts to assist searchers throughout the entire search process. The efforts we reviewed focused either on assistance during query specification, such as Farrell et al. [2004], Kelly et al. [2009, 2010], and Teevan et al. [2009], or on results presentation, such as Käki [2005a, 2005b] and Dumais et al. [2001]. We took a more holistic approach by integrating assistance throughout the entire search process, from query specification, to results presentation, to query modification.

3. METHODOLOGY

We conducted a two-part study in order to collect information about how teachers currently look for LOs and evaluate the effectiveness, usefulness and ease of use of a specialized search tool. 3.1 Research Questions

In the first part of the study we were interested in answering the following research questions.

(1) What features do teachers feel would help them find LOs more easily?
(2) Are teachers able to find LOs easily on the Web using Google?
(3) Do teachers perceive Google to be an easy-to-use tool for finding LOs on the Web?
(4) Do teachers perceive Google to be a useful tool for finding LOs on the Web?


Finding answers to the following questions motivated the second part of the study.

(5) Are teachers successful in finding LOs using a specialized search assistant?
(6) Is the specialized search assistant perceived to be easy to use for finding LOs?
(7) Is the specialized search assistant perceived to be useful for finding LOs?

3.2 Participants

The sample population consisted of 30 professors from a university in northern Mexico. Participants were chosen at random from among the computer science and computer engineering faculty. The intent of working with faculty from computer-related disciplines was to control for computer literacy as a factor in the study, allowing us to focus on the strategies they used to search for LOs. We wanted to measure how successful professors were when searching for LOs within their knowledge domain. It has been demonstrated that a searcher's familiarity with a search topic affects query length [Belkin et al. 2003] and verbal creativity and flexibility when writing query terms [Hölscher and Strube 2000]. Unlike other studies, the logs we used were not anonymous, so we were able to track all the actions that users made and talk to them about what they did. Finally, the way we conducted our study allowed us to know when participant queries were successful, unlike log-based studies where this crucial information is impossible to obtain. This is important because analyzing the search process from the perspective of both successful and unsuccessful searchers allowed us to identify strategies that lead to finding LOs as well as barriers which hinder search.

3.3 Procedure

Part I. The first part of the study had the objective of collecting data to understand how teachers search for LOs in order to obtain a sense of the requirements that a specialized search assistant should fulfill. Thus, the study began by holding individual sessions with participants. At the start of each session, participants were read the following definitions of LOs, obtained from different sources such as industry standards and papers published in academic journals.

(1) A learning object is defined as any entity, digital or non-digital, that may be used for learning, education or training [IEEE 2002].
(2) A learning object is any digital resource that can be reused to support learning [Wiley 2001].
(3) A learning object is a digital piece of learning material that addresses a clearly identifiable topic or learning outcome and has the potential to be reused in different contexts [Weller et al. 2003].

The reason for providing definitions to participants was to ensure that they understood the term LO and had common definitions they could all reference. For the first search task, participants were presented with this scenario: Imagine you teach an introductory programming class, in any programming language you choose. For this class, you wish to use a LO to exemplify the structure of the simplest program you can write in the language you chose. Next, they were asked to describe the LO they would need to teach under this scenario. By asking participants to describe the LO they were looking for, we were able to verify whether they did in fact find what they were looking for. Participants were informed that they could search for the LO until they found it or until they gave up. We chose this LO because it is relatively easy to find beginner-level resources on a


variety of programming languages. This would allow us to gather data regarding how participants search for LOs that are abundant on the Web and relatively easy to find.

For the second task, participants were asked to search for: an LO that visually and dynamically, through animation, shows how the bubble sort method works. The LO must show the code and its execution as an array is being sorted. That is, it must indicate which line is being executed and how it modifies the array. This type of LO was chosen because it is not solely a straight text object, and the animation requirement makes finding this type of object harder than finding an object such as a textual explanation. In this way, we were able to gather data regarding the search strategies participants use when searching for LOs that are harder to find. Participants were again informed that there was no time restriction on their searches.

The only restriction imposed on participants during both tasks was to search for the LOs using Google. Since, as Mostafa [2005] points out, Googling has become synonymous with research, and it has been found that students, faculty, and researchers use Google overwhelmingly as their search engine [Rieger 2009], it was likely that participants were familiar with using Google. We were able to confirm this preference with the exit questionnaire. All actions on the computer were recorded using Camtasia,6 a video-capture program. Audio recordings of the sessions were also collected.

At the conclusion of the searches, participants answered a questionnaire. One section of the questionnaire was used to obtain demographic data. Another section was oriented towards obtaining data regarding their searches for LOs. One particular question in this section asked participants to list the features they felt were lacking in the search engine they use that would help them do a better job of finding LOs. The final part of the questionnaire was an adaptation of the Technology Acceptance Model (TAM), developed by Davis [1989] to explain how users come to accept and use a technology. It has been suggested that the TAM is a cost-effective tool for predicting user acceptance of systems [Morris et al. 1997]. In addition, the TAM has been used in several studies to understand attitudes towards search engines [Liaw and Huang 2003; Capra et al. 2007]. This instrument measures users' perception of the ease of use and usefulness of software. Ease of use is defined as the degree to which a person believes that using a particular system would be free of effort. Usefulness is defined as the degree to which a person believes that using a particular system would enhance his or her job performance. These two measures are important because they have been correlated with intention to use [Davis 1989]. Participants were asked to express how useful and easy to use they considered Google for searching for LOs. The TAM questionnaire was translated to Spanish from the original English. Space was provided in each item for optional comments. Table I shows a translation from Spanish of the adapted TAM statements participants considered. For this part of the study, instead of the word TOOL, the word Google was shown. Statements 1-6 measure participants' perceptions of Google's ease of use for searching for LOs. Statements 7-12 measure participants' perceptions of Google's usefulness for searching for LOs. Participants were presented the statements in Table I in random order and asked to indicate how they felt about each statement using a 7-point Likert scale, with values ranging from (1) completely disagree to (7) completely agree.
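Scoring such an instrument is straightforward. The sketch below shows one way to aggregate the two TAM subscales; the field names and the example ratings are hypothetical, since the authors do not publish their analysis scripts. Each subscale score is simply the mean of its six 7-point ratings.

```python
# Minimal sketch of scoring the adapted TAM questionnaire (hypothetical
# field names; the paper does not publish its analysis code). Statements
# S1-S6 load on perceived ease of use, S7-S12 on perceived usefulness,
# each rated on a 7-point Likert scale (1 = completely disagree,
# 7 = completely agree).

from statistics import mean

EASE_ITEMS = [f"S{i}" for i in range(1, 7)]         # S1..S6
USEFULNESS_ITEMS = [f"S{i}" for i in range(7, 13)]  # S7..S12

def tam_scores(responses: dict[str, int]) -> tuple[float, float]:
    """Return (perceived ease of use, perceived usefulness) as item means."""
    ease = mean(responses[item] for item in EASE_ITEMS)
    usefulness = mean(responses[item] for item in USEFULNESS_ITEMS)
    return ease, usefulness

# Illustrative ratings for one participant (not data from the study).
participant = {"S1": 6, "S2": 4, "S3": 5, "S4": 4, "S5": 3, "S6": 5,
               "S7": 4, "S8": 5, "S9": 4, "S10": 4, "S11": 5, "S12": 6}
ease, usefulness = tam_scores(participant)
print(f"ease of use = {ease:.2f}, usefulness = {usefulness:.2f}")
```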
Part II. By design, the second part of the study took place approximately 6 months after the first one. This time period afforded us the opportunity to extract, process, and analyze the data from the first part of the study, as well as use this data to design

6 http://www.techsmith.com/camtasia.asp


Table I. Modified TAM Questionnaire Statements Used to Measure Participants' Perceptions of Ease of Use and Usefulness

S1   Learning to use TOOL to find LOs is easy for me.
S2   I would find it easy to get TOOL to do what I want it to do.
S3   My interaction with TOOL is clear and understandable.
S4   I find TOOL easy to use to look for LOs.
S5   I find TOOL flexible to interact with.
S6   It is easy for me to become skillful at using TOOL to look for LOs.
S7   Using TOOL to look for LOs would enable me to accomplish tasks more quickly.
S8   Using TOOL to look for LOs would improve my teaching.
S9   Using TOOL to look for LOs in my job would improve my productivity.
S10  Using TOOL to look for LOs in my job would enhance my effectiveness.
S11  Using TOOL to look for LOs would make it easier to do my job.
S12  I find TOOL useful in my job.

the search tool we call LOBSTER. During this 6-month time period, we also built the search tool and tested it with several users who were not participants in the study. The objective of the second part of the study was to evaluate LOBSTER. One of the aspects we evaluated was whether LOBSTER helped more participants find the LOs they were searching for. A second aspect was whether users found LOs in less time with LOBSTER. The third aspect was whether users viewed LOBSTER as a useful and easy-to-use tool for searching.

All the subjects who participated in the first part of the study were again contacted for this second part, with individual sessions scheduled for each participant. In these sessions, participants received a brief hands-on tutorial on how to search with LOBSTER. Following this, participants were presented with the same scenario from the first session and asked to provide a detailed description of a LO they would use for the situation. They then searched for it. Once the first search concluded, participants were given a detailed description of the same LO used for the second search in the first part of the study and asked to find it on the Web. Participants were restricted to using LOBSTER in both searches and informed again that they could search until they found the LO or gave up. All actions on the computer were recorded using Camtasia, and audio recordings were also collected. After the second search was concluded, participants answered a questionnaire. The questionnaire only contained questions from the TAM questionnaire shown in Table I. Space was provided in each item for optional comments. For this part of the study, instead of the word TOOL, the word LOBSTER was shown. Again the statements were presented in random order to participants, and they used a 7-point Likert scale with values ranging from (1) completely disagree to (7) completely agree to indicate how they felt regarding each statement.

Upon completion of participants' search sessions, all session recordings were reviewed and coded using the coding scheme proposed in Hargittai [2004]. This coding scheme involves assigning a predefined code to each action taken by the user in order to arrive at a Web site, as well as recording the uniform resource locator (URL) of the site. In this way it is possible to trace a user's path during a search session. The coding scheme also provides codes for recording what a user did when visiting a particular page, such as whether the browser windows were maximized, minimized, or resized. All coding data was recorded on a spreadsheet for analysis. The spreadsheet contained the columns shown in Tables II and III.

Table II. Example of Search Activity Coding, Part I

Action  URL                                                                     Time         Task  Scroll  Search Query
10      www.google.com                                                          00:00:44.25  1
92      www.google.com.mx                                                       00:00:48.02  1
30      www.google.com.mx/search                                                00:01:09.01  1             introduccion al lenguaje Java
303     www.google.com.mx/sorry                                                 00:01:16.02  1
92      www.google.com.mx/search                                                00:01:45.24  1     1
510     studies.ac.upc.edu/EPSC/FSD/FSD-Practica1.pdf                           00:02:01.25  1     3
20      www.google.com.mx/search                                                00:03:37.28  1     1
510     www.abcdatos.com/tutoriales/programacion/java/java/principiantes.html   00:03:54.06  1
20      www.google.com.mx/search                                                00:04:14.01  1
47      www.google.com.mx/search                                                00:04:33.09  1
510     www.it.uniovi.es/docencia/GestionGijon/redes/Rede-Practica1-1.pdf       00:04:47.16  1
20      www.google.com.mx/search                                                00:05:28.19  1

Table III. Example of Search Activity Coding, Part II (rows keyed by the Time column of Table II)

Time         Page Number  Link Position  Link Text                                                    Success  Comments                     Results Found
00:00:44.25                                                                                                    Types in requested code
00:01:09.01                                                                                           0                                     598,000
00:02:01.25  1            6              Introducción al lenguaje Java                                0
00:03:54.06  1            9              Tutoriales Programación Java: Java: Principiantes ABCdatos   0
00:04:47.16  2            5              Introducción al lenguaje Java                                0        Page did not finish loading

Table IV. Partial List of Codes

Code  Description of Action Type
10    Types in URL in location bar
20    Back button once
30    Major search engine search
303   Google Error
47    Search engine "More results/next 20" link
510   Google results link
92    Automatic redirect
The first column of Table II, labeled Action, lists the coded actions users took during a session. The meaning of these codes is shown in Table IV. The second column of Table II, labeled URL, shows the URL of the Web site that was visited. The third column shows the time the action was taken. This time is with respect to the start of the search session. The column labeled Task is used to record the search task that the action corresponds to. The Scroll column is used to register the scrolling activities that were performed at each Web site. The terms written to perform a search are recorded in the Search Query column. Columns Page Number, Link Position, and Link Text, shown in Table III, are used to record which search results the user followed. Column Success is used to record whether the user found what he was looking for or not. Observations regarding the action are recorded in the Comments column. Finally, for each query, the number of search results reported by the search engine is recorded in column Results Found. These last two columns were added for this study.

As previously mentioned, the coding methodology used makes it possible to trace users' navigation on the Web. Tables II and III show an excerpt of a search session in which we can see that the participant began by typing the URL of Google (action 10). Approximately 4 seconds later, the participant was automatically redirected to Google in Mexico (action 92). The participant then typed introduccion al lenguaje Java (introduction to the Java language) and performed a search on Google (action 30). Google reported 598,000 results in response to the query, but first responded with an error requiring further input from the user (action 303). After the required user input, the participant was redirected to the Google search results (action 92). After scrolling down the results (scroll 1), the participant selected the sixth result from the first page (action 510), which was a link with the text Introducción al lenguaje Java (Introduction to the Java language). The participant inspected the Web site, scrolling down, up, then back down (scroll 3). Next, the participant clicked on back (action 20), scrolled further down the results listing (scroll 1), and selected the link Tutoriales Programación Java: Java: Principiantes ABCdatos (Programming Tutorials Java: Java: Beginners ABCdatos) (action 510). This was the ninth result from the first page. After briefly perusing the page, the participant again clicked on back (action 20), then requested the second search results page (action 47). The participant selected the fifth result, a link reading Introducción al lenguaje Java (Introduction to the Java language) (action 510). However, this page did not finish loading, and the participant clicked on back (action 20). With this short 5-minute excerpt (the user continued searching for 13 more minutes), we show the extent to which we can recreate search sessions. By coding search sessions, we are able to gather valuable data with which to characterize how teachers search for LOs on the Web.
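To make the coding scheme concrete, the sketch below models a few rows of the excerpt above as records, using hypothetical field names (the study stored these columns in a spreadsheet, not in code), and replays the session to report how deep into the result list the user went.

```python
# Minimal sketch of the session-log structure described above, with
# hypothetical field names. Each row is one coded action in Hargittai's
# scheme (see Table IV for the action codes).

CODES = {10: "types URL", 20: "back button", 30: "search",
         303: "search engine error", 47: "next results page",
         510: "follows result link", 92: "automatic redirect"}

session = [
    {"t": "00:00:44.25", "action": 10,  "url": "www.google.com"},
    {"t": "00:01:09.01", "action": 30,  "url": "www.google.com.mx/search",
     "query": "introduccion al lenguaje Java", "results_found": 598_000},
    {"t": "00:02:01.25", "action": 510, "page": 1, "position": 6},
    {"t": "00:04:47.16", "action": 510, "page": 2, "position": 5},
]

def summarize(rows: list[dict]) -> None:
    """Replay a coded session and report the deepest result clicked."""
    clicks = [r for r in rows if r["action"] == 510]
    deepest = max((r["page"], r["position"]) for r in clicks)
    print(f"{len(clicks)} result links followed; "
          f"deepest click: page {deepest[0]}, position {deepest[1]}")
    for r in rows:
        print(r["t"], CODES[r["action"]], r.get("query", ""))

summarize(session)
```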
4. RESULTS
In this section, we present results from both parts of our study. The demographic data we collected indicated that our sample population consisted of 21 females and 9 males; exactly half were between the ages of 25 and 35, and the remaining half were between 36 and 45. The questionnaire collected information regarding participants' search habits, such as the frequency and duration of their searches. One of the questions was: How many times per week do you look for LOs during a typical semester? Participants selected the frequency of their searches from a fixed collection of answers. Their responses are shown in Table V. Another question was: How many hours per week do you look for LOs during a typical semester? The choices given and the number of times each response was selected are shown in Table VI. Because LOs can be found in a variety of places besides the Web, we asked: How often do you believe you look for LOs on the Internet? Participants' responses are shown in Table VII.

Table V. Responses to Question: How Many Times Per Week Do You Look for LOs During a Typical Semester?

Response                 Participants
None                     3% (1)
Once a week              17% (5)
2 or 3 times per week    20% (6)
Several times per week   33% (10)
Once a day               7% (2)
Several times per day    20% (6)

Table VI. Responses to Question: How Many Hours Per Week Do You Look for LOs During a Typical Semester?

Response                 Participants
Less than 1 hour         0% (0)
Between 1-5 hours        47% (14)
Between 5-10 hours       33% (10)
Between 10-15 hours      10% (3)
Between 15-20 hours      7% (2)
Between 20-25 hours      0% (0)
More than 25 hours       3% (1)

Table VII. Responses to Question: How Often Do You Believe You Look for LOs on the Internet?

Response                 Participants
Never                    0% (0)
Rarely                   7% (2)
Sometimes                37% (11)
Frequently               43% (13)
Almost always            13% (4)

To learn how skillful they considered themselves to be at finding LOs on the Web, we posed the question: When you look for LOs, how often do you find exactly what you are looking for? Responses are shown in Table VIII. Participants' answers indicate that they spend between 1 and 10 hours, distributed over several sessions per week throughout the semester, searching for LOs. They are split as to how successful they consider their searches: almost half (14) of them consider that they frequently or almost always find what they look for, while 14 others qualify their success rate as only sometimes successful. Finally, participants were asked to provide a list of features they believed were missing from Google and should be included in order to make their searches with Google more fruitful. We report some of the answers we received, and how we used them in the design of the search assistant, next.

4.1 Requirements for the Search Assistant

The questionnaire that participants answered at the end of their searches with Google included the open question: What characteristics should the search engine you normally use have in order to help you find Learning Objects? The answers provided by participants gave some insight into how to better assist them to find LOs and helped us

Table VIII. Responses to Question: When You Look for LOs, How Often Do You Find Exactly What You Are Looking For?

Response        Participants
Almost always   10% (3)
Frequently      37% (11)
Sometimes       47% (14)
Rarely          7% (2)

Table IX. Search Engine Characteristics Suggested by Participants

1. An option to show text and images.
2. Buttons for video, images, audio and general Web pages.
3. A section for advanced searches.
4. It should have search filters.
5. Help find results that come only from educational institutions.
6. Filter by type of content.
7. Show suggestions.
8. Show results obtained by others who have made similar searches (like Amazon suggestions) and assign some type of ranking to see if they have been useful.
9. It should sort the pages it finds using various criteria.
10. Information sources should be better classified.
11. Classify the subjects into presentations, text explanations, graphical applications, and if you ask for learning objects it should only show these.
12. A classification depending on the area and particular topic.
13. It should identify the type of object it is, for example, video, audio, presentations, examples, images.
14. Make searches by language easier.
15. Partial search, translator, options to search by page origin and page source languages.
16. Specify preferred search language.
17. Provide a translator.

answer research question 1. Table IX shows some of the responses participants gave. Comments 1-3 indicate that subjects require assistance in finding multiple types of LOs. However, these comments also highlight participants' lack of awareness of the search options and functionality offered by popular search engines, such as image and video search and advanced search. In comments 4-6, participants indicate they require support for filtering out or eliminating results that are unrelated to LO search. Participants also mention they require support through search suggestions, as shown in comments 7-8. In addition to suggestions, they expressed a desire to know what others are doing when they search. With comments 9-13, participants highlight their need for better results classification. Finally, comments 14-17 show a need for support for searching in languages other than their native language.

Even though major search engines already provide many of the functionalities that participants indicate they require to improve their LO searches, it is clear from their comments that participants are not aware of them. Furthermore, as evidenced by participants' actions during the search sessions in the first part of the study, few participants use advanced search, and when they do use it, they do so incorrectly. For the Google search sessions, only 6 out of the 30 participants used advanced search in the


Fig. 1. LOBSTER initial screen.

first search task. In the second task, 12 of the participants used advanced search, but of these, 5 used it incorrectly in at least one query. Other subjects used Google's Advanced Search user interface, but the resulting queries contained none of the advanced search syntax elements. Therefore, in order to provide support for LO searches, these functionalities should be provided in such a way that their use is either obvious or easy for users.

We also compared participants' queries and studied those queries which had led to successful searches and those which had not. We found that both types of queries had similar terms but that the unsuccessful queries were missing some important terms. From this, we obtained a general model of a query for an LO. We implemented the components of this model through the various elements of LOBSTER's user interface. In addition, we observed that participants rarely used Google's syntax for expressing the format of the file they were looking for and instead relied on natural language. For example, some queries had the terms interactive demonstration instead of filetype:swf. This indicated to us that our search assistant had to provide a mechanism through which users could express file formats in a natural way, which would then be transformed into the syntax that the search engine recognizes.
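The sketch below illustrates the kind of rewriting this observation calls for: detecting a natural-language format phrase and substituting Google's filetype: syntax. The phrase table is illustrative; the paper does not list LOBSTER's internal mapping.

```python
# Sketch of the transformation described above: users express file
# formats in natural language, and the assistant rewrites them into
# Google's filetype: operators. The mapping is an assumption for
# illustration, not LOBSTER's actual table.

FORMAT_PHRASES = {
    "interactive demonstration": "filetype:swf",
    "slides": "(filetype:ppt OR filetype:odp)",
    "document": "(filetype:doc OR filetype:pdf)",
}

def rewrite_query(query: str) -> str:
    """Replace natural-language format phrases with advanced operators
    (the query is case-normalized for simplicity)."""
    for phrase, operator in FORMAT_PHRASES.items():
        if phrase in query.lower():
            # Drop the phrase from the topic terms and append the operator.
            terms = query.lower().replace(phrase, "").split()
            return " ".join(terms) + " " + operator
    return query

print(rewrite_query("bubble sort interactive demonstration"))
# -> "bubble sort filetype:swf"
```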
4.2 Description of LOBSTER's User Interface
In designing LOBSTER, we were mindful of the need to provide users with an interface that would be easy to use and useful, so that they would be drawn to use LOBSTER. For these reasons, we selected user interface components, such as textboxes, check boxes, and tabbed panels, that most Web users are already familiar with from surfing the Web. In addition, we strove to elicit a thorough description of the LOs without forcing users to fill out lengthy forms. Finding LOs on the Web is a process that requires not only structuring an appropriate query, but also navigating through sometimes thousands of search results. Thus, we provided users with an organized, easily navigable presentation of search results.

When the user first navigates to LOBSTER, he sees the interface shown in Figure 1. Instead of a single textbox, like most search engines, LOBSTER has 5 components, which help guide the user to better describe the LO. To begin, a description of the LO's main topic is requested by LOBSTER in Main topic of LO and Programming language. The presence of a textbox for Programming language is due to the fact that this version of LOBSTER was meant for use by professors from the computing field during the study. Then, the user selects any or all file formats for the LO in the LO formats component of LOBSTER. Another choice the user makes through the interface is the search language; the LO languages component allows the user to simultaneously select the language for the query as well as the language for the LO. The user then completes the description of the LO in the last component, Additional terms. This part of the description is where the user indicates how the LO should be structured, for example, as an example or an explanation. Finally, when the


Fig. 2. Example of LOBSTER’s search results for an LO in video format.

user presses the Search for Learning Object button, LOBSTER creates several queries based on the description of the LO.

4.3 Implementation of Search Assistant

In this section, we describe the implementation of the Learning Object Search Tool Enhancer (LOBSTER). We describe several of the elements of the user interface and indicate how these elements are related to the requests for functionality that participants made.

To address the requirement for filters and for facilitating the search for multiple types of LOs, extracted from comments such as 1-6 and 9-13 in Table IX, we included the use of filters during query specification and in the search results. At query specification, the user selects from among the choices in the LO formats component the type of LO he is looking for on the Web. Each of the available choices enhances the query by filtering out unwanted types of LOs. When the search results are presented, they are clustered according to the type of LO. There is support in the literature for organizing search results in this manner. For example, Chen and Dumais [2000] found that their study subjects liked a category interface much better than a list interface and were faster at finding information that was organized into categories. In this way, when the user wants, for example, an LO in the form of a video, the user selects the tab marked Video and obtains the filtered results, as shown in Figure 2.

Another feature we implemented in LOBSTER, in fulfillment of one of the requirements, was term suggestion. Term suggestion can be a way to address the fact that Web users employ few terms in their queries [Huang et al. 2003]. LOBSTER suggests query terms to users in the three components where text can be entered freely: Main topic of LO, Additional terms, and Programming language. The terms LOBSTER suggests depend on which component the user is filling out. For example, in Programming language only the names of programming languages are suggested, while in Main topic of LO suggested terms are topics common to computer science, excluding the names of programming languages. The terms suggested in Main topic of LO and Additional terms were extracted from course descriptions, syllabuses, and other printed material used at the university for beginning and intermediate computer science courses. These terms were then translated to English, and in this way LOBSTER is able to suggest


terms in both English and Spanish. The Programming language component's terms were extracted from the Open Directory Project;7 these were not translated because the names of programming languages are the same in both English and Spanish. These components use dynamic query term suggestions, which list terms that contain the letters the user is typing. This type of term suggestion is an intermediate solution between requiring a user to think of the terms and their spelling and having him select terms from a long list of suggestions [Hearst 2009, p. 105].

To address the requirement for support for searches in multiple languages, indicated by comments 14-17, we included several components in LOBSTER that assist users. We based the selection of the languages that require support on the responses participants provided in the questionnaire from the first part of the study. When asked to indicate all the languages in which they preferred to write their search queries, all 30 indicated Spanish and 23 also indicated English. When asked what language they preferred for the LOs, 14 specified Spanish, 1 indicated English, and the remaining 15 indicated both Spanish and English. From this, we concluded that assistance was needed in both English and Spanish and that this assistance was important not only during query specification but also during results examination. Therefore, we provided language assistance through bilingual term suggestion, translation of query terms, and search language selection.

Two of LOBSTER's components provide query term translation: Main topic of LO and Additional terms. In each of these components, the terms the user writes are automatically translated into both English and Spanish as soon as he finishes typing them and moves to another one of LOBSTER's components. This means that initiating translations requires no extra effort from the user. LOBSTER takes the terms written in the textbox and initiates two translations, first from English to Spanish and then from Spanish to English. Both translations are performed regardless of the language of the original terms. This technique was used because we found that the Google translation API only translates words when the source language is specified correctly; otherwise, it leaves the terms unchanged. Users can then edit these translations so that they more accurately reflect the description of the LO. This technique, where the user can edit proposed translations, allows not only for improving the translation but also prompts users to rethink their original query [Petrelli et al. 2006].

Search language selection is provided when the user selects the desired search language in LO languages. When a search is initiated, the query forwarded to the search engine is formed with terms written in the selected language. In addition, before the search is performed, Google is configured with the preferred language. One helpful feature of LOBSTER, not found or not enabled by default in popular search engines, is that it can perform searches in both languages simultaneously and present the results clustered by language when the user requests it.
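A minimal sketch of this two-direction translation trick follows. The translate() function here is a stand-in for whatever machine-translation backend is available (the Google translation API that LOBSTER used has since been deprecated); the point is the control flow, which never needs to detect the source language explicitly.

```python
# Sketch of the two-direction translation described above. The demo
# dictionary stands in for a real MT backend; like the API the paper
# describes, it returns the text unchanged when the source language is
# misstated, which is exactly what the control flow exploits.

_DEMO = {("en", "es"): {"linked lists": "listas ligadas"},
         ("es", "en"): {"listas ligadas": "linked lists"}}

def translate(text: str, source: str, target: str) -> str:
    # Stand-in for a real translation call (hypothetical backend).
    return _DEMO[(source, target)].get(text.lower(), text)

def bilingual_terms(text: str) -> dict[str, str]:
    """Translate both ways; whichever direction changed the text
    reveals the source language."""
    to_spanish = translate(text, source="en", target="es")
    to_english = translate(text, source="es", target="en")
    if to_spanish != text:
        return {"en": text, "es": to_spanish}
    return {"en": to_english, "es": text}

print(bilingual_terms("linked lists"))    # {'en': 'linked lists', 'es': 'listas ligadas'}
print(bilingual_terms("listas ligadas"))  # {'en': 'linked lists', 'es': 'listas ligadas'}
```

In LOBSTER, both proposed translations remain editable, so a wrong machine translation can be corrected by the user before the queries are built.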
4.4 Searching with LOBSTER
With the description of the LO in Figure 3, we exemplify the tasks that occur behind the scenes of a search for an LO using LOBSTER. In the example, when LOBSTER assembles the queries for the LOs in English, it begins with the term Java, which the user wrote for Programming language. Then, it takes the terms in English, linked lists, that describe Main topic of LO. Before adding these terms to the query, LOBSTER places them in quotation marks, since the user checked the Exact phrase box and this is the syntax required by Google. Then, LOBSTER takes the term Examples written in Additional terms. Thus, we obtain the segment of the query that is common to all

7 http://www.dmoz.org


Fig. 3. Description of an LO on LOBSTER.

Fig. 4. Example of resulting queries in English sent to Google.

Fig. 5. Example of resulting queries in Spanish sent to Google.

English LOs in the description: Java "linked lists" Examples. This can be ascertained in the queries shown in Figure 4. Similarly, when LOBSTER assembles the queries for the LOs in Spanish, it takes the term Java, which the user wrote for Programming language, but it takes the terms in Spanish, listas ligadas, that describe Main topic of LO. Before adding these terms to the query, LOBSTER also places them in quotation marks. Then LOBSTER takes the term ejemplos written in Additional terms. Again we obtain the part of the query that is common to all types of Spanish LOs in the description: Java "listas ligadas" ejemplos. This can be verified in the queries shown in Figure 5.

To complete the queries, LOBSTER adds LO format specifications where appropriate. In the example, all formats are selected. For the LO format Documents, two separate queries are sent to Google. The first lists all preferred document file types using Google's advanced query syntax: (filetype:doc OR filetype:odf OR filetype:pdf OR filetype:txt OR filetype:docx). The first and last file types correspond to Microsoft Word,


Fig. 6. LOBSTER user interface showing results of example search. (A) Language tabs, (B) LO format tabs, (C) Listing of search results, (D) Original LO description.

while the rest correspond to Open Office Writer, Adobe Acrobat portable document format, and plain text files. The second document query targets HTML, the Web's native format; for this, no further specification is added to the query. For the LO format Presentations, LOBSTER adds a list of common presentation formats: (filetype:ppt OR filetype:odp OR filetype:swf). These correspond to Microsoft PowerPoint, Open Office Impress, and Macromedia Small Web Format files, respectively. The LO format Interactive also requires two separate file format specifications. The first is simply filetype:swf. The second interactive LO format is Java applets; this type of LO is found embedded within Web pages, that is, it is not found as a stand-alone file. Therefore, to retrieve this type of LO, the term applet is added to the query. The queries for the final two LO formats, Images and Video, do not require additional terms beyond what LOBSTER composed for the first segment. The reason for this is that to find these types of LOs, LOBSTER uses the Google Image Search and Google Video Search components, which are part of the Google AJAX Search Application Programming Interface (API).

Each one of the 14 queries LOBSTER assembles in the example (Figures 4 and 5) is sent to a separate Google search component. Queries for Documents, Presentations, and Interactive LOs are sent to Google Web Search components, while queries for Images and Video LOs are sent to Google Image Search and Google Video Search components, respectively. Each of the search components is configured to indicate the preferred language for the LO. When Google returns the results, LOBSTER organizes them clustered by language (Figure 6(A)) and then by LO file type (Figure 6(B)). Each query's results are placed in a separate, clearly labeled tab such that the user can quickly identify the types of results in each listing (Figure 6(C)).

The organized search results appear below the original LO description. An important feature of LOBSTER is that the user does not lose sight of the description and can easily modify it to refine the search (Figure 6(D)). By showing both the description and the search results on the same screen, the cognitive load on the user is reduced as compared to Google's Advanced Search interface, because the user does not have to remember what terms he wrote in the textboxes that led to the resulting query and search results. In Google's Advanced Search interface, when the search results are shown, they replace the form the user

Table X. Actions on Google Required to Make an Equivalent Search on LOBSTER

(1) Access Google Image Search in two browser windows.
(2) Select English as the search language in one of the windows.
(3) Select Spanish as the search language in the other window.
(4) Access Google Video Search in two browser windows.
(5) Select English as the search language in one of the windows.
(6) Select Spanish as the search language in the other window.
(7) Access Google Web Search in ten browser windows.
(8) Select English as the search language in five of the windows.
(9) Select Spanish as the search language in the five remaining windows.
(10) Type the queries shown in Figures 4 and 5 in the corresponding browser windows.
(11) Initiate each of the fourteen searches in the fourteen separate windows.

On the Google Advanced Search interface, when the search results are shown, they replace the form the user filled out with his query; to return to the form, the user must press the back button. This is similar to the split attention effect described in Mayer and Moreno [2003], because the searcher's visual attention is split between viewing the advanced search form and viewing the search results. The split attention effect taxes the user: his capacity for mentally holding and manipulating words and images is limited, so his cognitive load is greater. A reduction in cognitive load is a desirable characteristic of user interfaces, since the more a user must remember, the greater the probability that he will make a mistake with a system [Pressman 2006].

If the user were to search for the LO described in Figure 3 using Google, he would have to perform all the actions shown in Table X. Clearly, LOBSTER's user interface is more user friendly, as it integrates all these actions into one seamless process in a single browser window. When using Google to perform the search for the LO, the user needs to keep track of 14 separate, independent browser windows. Also, if the user needed to modify the search, perhaps to add a term, he would have to access each of the 14 browser windows, type in the new term, and refresh each window one by one. LOBSTER, in contrast, allows easy modification of all searches, since the modification requires typing the term only once in a single window.

4.5 Success in Finding LOs

Research questions 2 and 5 referred to participants' success in finding LOs with Google and with the specialized search assistant LOBSTER. The analysis began by comparing the number of participants who were able to successfully find LOs using LOBSTER versus Google. When participants searched for the LOs they described in the first part of the session, 24 (80%) were successful using Google (with the Basic and/or Advanced interface), while 29 (96%) were successful using LOBSTER. We applied a z-test for two proportions to test the significance of this difference [Moore and McCabe 2006] and found it to be statistically significant with p < 0.037. When participants searched for the LOs we described to them in the second part of the sessions, 12 (40%) were successful using Google, compared to 20 (66%) using LOBSTER. This difference is also statistically significant with p < 0.032.

Most participants (80%) were able to find the first LOs easily with Google, and this task was facilitated even more with LOBSTER. The second LO was more difficult for participants to find with Google. This could be because the first LO could be found as a text-based LO, which is much easier for search engines to index since its content can be analyzed by known algorithms. The second LO, however, was an animation, which is harder to search for since its indexing relies on tags assigned by others to describe its content.
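As a sketch of the statistical test applied here, the following computes an unpooled, two-sided z-test for two proportions. This is our reading of the procedure (the paper cites Moore and McCabe [2006] but does not spell out the exact variant), so the printed p-values only approximately reproduce those reported above.

    # Two-proportion z-test, unpooled standard error, two-sided p-value.
    # Illustrative only; the paper's exact test variant is an assumption.
    from math import sqrt
    from scipy.stats import norm

    def two_proportion_z(successes_a: int, successes_b: int, n: int = 30):
        p_a, p_b = successes_a / n, successes_b / n
        se = sqrt(p_a * (1 - p_a) / n + p_b * (1 - p_b) / n)  # unpooled SE
        z = (p_b - p_a) / se
        return z, 2 * (1 - norm.cdf(abs(z)))  # two-sided p-value

    print(two_proportion_z(24, 29))  # first LO: 80% vs. 96% -> p ~ 0.037
    print(two_proportion_z(12, 20))  # second LO: 40% vs. 66% -> p ~ 0.032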


Table XI. Search Times in Minutes for the First LO

Search Tool    Mean     Std. Dev.   Min.   Q1     Median   Q3      Max.
Google         11.93    10.64       0      4      8.5      18      47
LOBSTER         8.39    11.12       0      2      3.0      11      42

Table XII. Search Times in Minutes for the Second LO

Search Tool    Mean     Std. Dev.   Min.   Q1     Median   Q3      Max.
Google         35.83    20.81       1      21     33.5     51      94
LOBSTER        33.11    27.62       6      9.75   26.5     42.75   103

4.6 Search Times

Also related to success in searching for LOs is the time it takes to find them. To determine whether there was a significant difference between LOBSTER and Google, we compared the times participants required to locate the first LOs. Table XI shows the mean times and standard deviations obtained for the search for the LOs participants described, along with other descriptive statistics: the first, second, and third quartiles and the minimum and maximum times. These values show a decrease in search time when using LOBSTER. However, the values also indicate that the times are not symmetrically distributed; the distribution is skewed to the right. It is not unusual to find skewed search times [Jansen and Spink 2005; Xu and Mease 2009]. To address skewness and outliers, we applied a logarithmic transformation and then used a matched-pair t-test. The analysis revealed a statistically significant difference (p < 0.022) in the mean time to look for the first LO using LOBSTER (8.39 minutes) as compared to Google (11.93 minutes).

We also compared search times for the LO we described to participants. Table XII shows the mean times and standard deviations obtained for this search, together with the minimum and maximum times and the first, second, and third quartiles. These values also show a decrease in search time when using LOBSTER. In this case, the values for the first, second, and third quartiles do not indicate that the distribution is skewed; however, the presence of outliers requires that we also apply a logarithmic transformation. After the transformation, we used a matched-pair t-test and determined there was no significant difference between LOBSTER (33.11 minutes) and Google (35.83 minutes).

A deeper analysis of the search times for the second LO revealed some interesting insights as to why the difference in search times between Google and LOBSTER was not significant. Among the eight participants who succeeded in finding the LO with LOBSTER but not with Google, only two spent more time searching with LOBSTER than with Google; one took 8 more minutes, the other 28. Among the 10 participants who did not find the LO with either LOBSTER or Google, three spent more than twice as much time with LOBSTER as with Google trying to find it. Among the 12 participants who successfully found the LO with both LOBSTER and Google, the mean difference was 7.73 minutes. We believe the search times for the second LO using Google could have been longer but were cut short by participants giving up after continually receiving unsatisfactory search results. With LOBSTER, however, the fact that they received even slightly satisfying results encouraged them to keep searching, which led to their eventually finding the LOs.
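The skewness handling described above can be sketched as follows. The data here are invented, and adding 1 before taking logarithms is our assumption for coping with the recorded 0-minute times; the paper does not say how zeros were handled.

    # Log-transform paired search times, then run a matched-pair t-test.
    # Times are made-up, right-skewed samples; +1 guards against log(0).
    import numpy as np
    from scipy.stats import ttest_rel

    google_minutes = np.array([3, 5, 8, 9, 12, 18, 25, 47])   # invented data
    lobster_minutes = np.array([2, 2, 3, 3, 4, 11, 20, 42])   # invented data

    t_stat, p_value = ttest_rel(np.log(google_minutes + 1),
                                np.log(lobster_minutes + 1))
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")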

4.7 Search Strategies

In this section, we compare how participants searched for the LOs in our study with the results reported in Jansen et al. [2000].

Table XIII. TAM Questionnaire Results for Perceived Ease of Use and Usefulness, Median (Mean)

Statement   Google       LOBSTER
S1          5 (5.03)     7 (6.60)
S2          5 (4.67)     6 (6.07)
S3          5.5 (5.20)   7 (6.63)
S4          5 (4.83)     7 (6.60)
S5          6 (5.50)     7 (6.60)
S6          6 (5.30)     7 (6.73)
S7          5.5 (5.03)   6 (6.13)
S8          5 (5.17)     6 (6.33)
S9          5.5 (5.33)   6 (6.03)
S10         5 (5.30)     6 (6.07)
S11         6 (5.47)     6 (6.13)
S12         6 (6.30)     7 (6.60)

It is important to point out that the Jansen et al. [2000] study drew its data from search engine logs and thus involved thousands of queries and users. In our case, the 30 participants made just over 500 queries during the entire study. We report average values and show standard deviations in parentheses.

One of the metrics found in the literature is the number of queries users make during search sessions; this is reported as 1.6 queries per session. In our study, the number of queries varied depending on the search task and the search tool. When participants searched for the first LO using Google, they wrote an average of 4.14 (4.29) queries, but when they used LOBSTER they wrote an average of only 2.46 (3.04). This decrease is not surprising considering that they found the LO faster with LOBSTER. When participants searched for the second LO, they used an average of 12.96 (9.11) queries with Google and 10.29 (7.85) with LOBSTER. This higher number of queries reflects the difficulty that finding the second LO posed for participants.

The number of terms per query that participants used in the study also differs significantly from the 2.21 terms reported in the literature. For the first task, subjects wrote an average of 5.00 (1.90) terms per query with Google and 6.30 (2.51) terms with LOBSTER. Participants actually used slightly fewer terms to search for the second LO with LOBSTER, 5.66 (2.72), than with Google, 5.84 (2.21).

Finally, the reported number of results pages, with ten results each, that users review during their search sessions is 2.35. Participants in our study consulted an average of 1.09 (0.44) pages with Google and 2.77 (2.61) with LOBSTER for the first search task. For the second task, only 0.91 (0.60) pages were consulted with Google. This low value is a result of participants issuing queries but not reviewing the results. On LOBSTER, participants viewed an average of 3.73 (1.63) results pages. We should point out that on LOBSTER, results pages show 8 results per page.
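Metrics of this kind are straightforward to compute from a query log. The sketch below uses an invented, simplified log format, since the study's raw data is not published.

    # Per-session query counts and naive whitespace term counts from a query log.
    # The log structure and queries below are invented for illustration.
    from statistics import mean, stdev

    sessions = [
        ['java "linked lists" examples', 'java "listas ligadas" ejemplos'],
        ["sorting animation", "bubble sort interactive demo", "ordenamiento burbuja"],
    ]  # one inner list of query strings per participant session

    queries_per_session = [len(session) for session in sessions]
    terms_per_query = [len(query.split()) for session in sessions for query in session]

    print(f"queries/session: {mean(queries_per_session):.2f} ({stdev(queries_per_session):.2f})")
    print(f"terms/query:     {mean(terms_per_query):.2f} ({stdev(terms_per_query):.2f})")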

4.8 User Acceptance

We obtained the responses to research questions 3–4 and 6–7 from the Technology Acceptance Model (TAM) section of the exit questionnaires. Table XIII shows the median and mean of the responses to the statements regarding participants' perceptions of both tools' ease of use (S1–S6) and usefulness (S7–S12). Responses were averaged to obtain one score for usefulness and one for ease of use for Google and for LOBSTER, in a manner similar to Capra et al. [2007] and Ortega et al. [2007]. These scores, along with their standard deviations, are shown in Tables XIV and XV.


Table XIV. Ease of Use Scores for TAM Questionnaire

Search Tool    Mean    Std. Dev.
Google         5.089   1.095
LOBSTER        6.539   0.471

Table XV. Usefulness Scores for TAM Questionnaire

Search Tool    Mean    Std. Dev.
Google         5.433   0.820
LOBSTER        6.217   0.578

To determine whether the differences among the scores were significant, we used a matched-pair t-test and found a significant difference (p < 0.001) in ease of use for LOBSTER (6.539) as compared to Google (5.089). On the seven-point Likert scale we presented to participants, a value of 5 corresponds to Slightly agree, while 6 corresponds to Agree. The mean values we obtained show that users agreed with the statements regarding LOBSTER's ease of use, whereas they only slightly agreed with the corresponding statements about Google. There was also a statistically significant difference (p < 0.001) between the usefulness scores for LOBSTER (6.217) and Google (5.433). Again, we found that participants on average agreed that LOBSTER was useful and only slightly agreed that Google was useful for finding LOs. The differences in participants' perceptions of Google's and LOBSTER's ease of use and usefulness were probably due to the fact that more users found LOs with LOBSTER than with Google. In addition, even among participants who did not find the second LO with LOBSTER, many found LOs that matched several of the characteristics they were required to find.
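The scoring and comparison just described can be sketched as follows. The response matrices are invented (three participants rather than 30), so the printed statistics are purely illustrative.

    # Average S1-S6 into an ease-of-use score and S7-S12 into a usefulness score
    # per participant, then compare the paired scores. All responses are invented.
    import numpy as np
    from scipy.stats import ttest_rel

    # rows = participants, columns = responses to S1..S12 on the 7-point scale
    google = np.array([[5, 5, 6, 5, 6, 5, 5, 5, 6, 5, 6, 6],
                       [4, 5, 5, 4, 5, 6, 6, 5, 5, 5, 5, 6],
                       [6, 4, 5, 5, 6, 5, 5, 6, 5, 6, 6, 7]])
    lobster = np.array([[7, 6, 7, 7, 7, 7, 6, 6, 6, 6, 6, 7],
                        [6, 6, 7, 6, 7, 7, 6, 7, 6, 6, 6, 7],
                        [7, 6, 6, 7, 6, 7, 7, 6, 6, 6, 7, 7]])

    ease_g, ease_l = google[:, :6].mean(axis=1), lobster[:, :6].mean(axis=1)
    use_g, use_l = google[:, 6:].mean(axis=1), lobster[:, 6:].mean(axis=1)
    print("ease of use:", ttest_rel(ease_g, ease_l))
    print("usefulness: ", ttest_rel(use_g, use_l))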

5. LIMITATIONS OF THE STUDY

One of the limitations of the study is that it focused on only one search engine, Google. One reason we chose to use only one search engine was to control this variable, ensuring that different success rates among participants were not due to differences in the search engines' indexing and ranking mechanisms. Given that a study comparing the first page of results from the four major search engines [Spink et al. 2006] found that only 1.1% of the results were common to all four and that 84.9% of results were found in only one of the search engines, this effect would very likely have affected our results had we used a variety of search engines. However, this also indicates that we must explore whether users will be better able to find LOs using different search engines, as we point out in the future work section.

The fact that our study focused on users from one particular field of study, computer science, is also a limitation. As we indicated previously, this was part of the design of the study, which helped us compare results among the participants and focus the design of the search tool's features, for example, text suggestion. The intent of working with faculty from computer-related disciplines was to control the variable of computer literacy as a factor in the study, allowing us to focus on the strategies they used to search for LOs. Part of our future work will be focused on working with participants from other fields of study and adjusting LOBSTER accordingly.

The second part of the study taking place six months after the first part could be considered a limitation because (a) Google's ranking algorithm is constantly being improved, (b) participants with good memories could remember the trails they followed during searches with Google and use this knowledge to find the LOs more easily with LOBSTER, (c) participants could have become more skilled at searching, and (d) participants could have acquired more experience in the subject matter.


However, we found that this was not the case. As reported in Spink et al. [2006], there was only a slight change in the first page of results on the search engines they tested (including Google) from April to July 2005. In addition, we reviewed the LOs that participants found in the first search tasks for both tools and saw that they retrieved different LOs in each search session. We also reviewed the second search tasks of those users who found the LOs with both tools. Only five of the participants found the same LO with both tools. We compared the queries they used and found that four of them used queries that were similar (though not identical) in both searches. However, the search trails they followed were completely different. From this, we conclude that they did not memorize or accurately remember the search trails that led them to successfully find the LOs in the first part of the study. This conclusion is further supported by Teevan [2008], whose participants could only describe 15% of a list of search results they had seen a few hours before.

In terms of participants becoming more skilled at searching during the six-month period between the two parts of our study, data analysis did not show that participants used more advanced search strategies in the second part of the study as compared to the first. In fact, participants used similar terms in both parts of the study, leading us to conclude that they would have obtained similar results (finding a small number of LOs) had it not been for the use of LOBSTER, which expanded their queries with additional terms and used advanced search syntax. Finally, while participants could have acquired more experience in the subject matter, we believe this was mitigated by the fact that participants were computer science and engineering faculty and that the subject matter selected for the LOs is from the basic to intermediate portion of most computer science and engineering curricula.

6. DISCUSSION

As the Web grows in diversity and volume of content, it is clear that the one-size-fits-all proposition of general-purpose search engines needs to be revisited. There already exist specialized search engines that assist users with their travel and shopping needs; other areas of specialization are sure to follow. One area that needs attention is the educational field, specifically the search for learning objects on the Web.

In this study, we found that by using the specialized search assistant LOBSTER, a greater number of participants were able to find the two LOs they were asked to find than when they used the popular search engine Google directly. In general, users spent less time searching for the LOs using LOBSTER. We think this is due to (1) participants specifying the LO formats they were looking for, with LOBSTER generating a more specific query; (2) participants being able to search for LOs not only in their native Spanish but also in English, thus gaining access to a considerably larger set of results; and (3) LOBSTER presenting search results in several clearly labeled categories alongside the interface where participants wrote their queries.

The set of assistive components in LOBSTER provides support throughout the search process. During query specification, the textboxes identify the type of query terms that the user should write. We believe that these guide users to write the kinds of terms that will lead to better search results, in much the same way that providing a larger text entry area led users to write longer queries, as reported in Kelly et al. [2005]. But LOBSTER does not just provide a visual representation of the query's components; it also provides assistance through dynamic query term suggestion. In this way, even if the user is somewhat unsure of the terms that he should write, he can be helped as he writes each letter. This is especially important for non-native speakers of English who attempt to write a query in that language. As pointed out in Hearst [2009, p. 74], people find it easier to look at something and recognize it than to describe it. Thus, if a user sees the terms he is thinking about, it will be easier for him to recognize them.


For foreign language learners this is especially important, since they find it easier to read a sentence in the foreign language than to write the sentence themselves. In terms of assisting users with creating queries using advanced search syntax, LOBSTER facilitates this and in fact hides the complexity of building these types of queries. We found that this was required not only from participants' comments but also from the queries they wrote during the first part of the study. This support is especially important for users for whom English is not their first language, because they tend to write Boolean operators in their own language even though search engines do not recognize them [Cacheda and Via 2001]. LOBSTER also performs a task that, though important, especially for multilingual search, is seldom done by users: configuring the search engine so it returns results in specific languages [Jansen and Spink 2005].

Throughout this study, we were particularly sensitive to the issues surrounding searches in English and Spanish. This was not only because our participants live in close proximity to the United States and are thus influenced by that culture; we believe this is an important issue in its own right, as evidenced by the current trend toward globalization. Cross-lingual information retrieval has changed and evolved along with the Web. At first, searches were performed by monolingual users who looked for documents in languages they could not read. Currently, the trend is toward facilitating searches made by multilingual users who search for documents in languages that they do understand [Sigurbjornsson et al. 2005]. Indeed, Jansen and Spink [2005] highlighted the importance of studying users from different parts of the world to better understand their search behavior, in order to design search engines that take this into account. However, we were not able to find any studies addressing the needs of users searching for LOs when English is not their first language, although Seyedarabi [2006] provides an option for searching for LOs for students of English as a second language.

Facilitating bilingual searches is a useful feature that search engines should provide in view of the changing language of the Web [Chen and Bao 2009]. The number of people who speak English as a second language will exceed the number of native speakers [Graddol 2000]. It is not difficult to imagine the difficulties faced by non-native speakers as they try to formulate queries in English. Specifically providing support for Spanish speakers is important, since Spanish is among the top three languages in use by Internet users [Internet World Stats 2010]. It is in this spirit that we provided assistance with bilingual search in LOBSTER. While Google allows users to establish preferences as to the languages of retrieved search results, query terms are not translated to the preferred languages during the search as they are in LOBSTER. Google provides a separate interface, that is, one that is not automatically part of the basic search; the user must take an extra step to access this feature. This interface, Google language tools,8 translates query terms from one language to another and then performs searches in both languages. Other search sites which perform this same function are 2lingual9 and Babelplex.10 Both return results from Google. These three bilingual search sites are missing the term suggestion feature found in LOBSTER.
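To give a flavor of what bilingual, domain-specific term suggestion involves, here is a minimal, hypothetical sketch; LOBSTER's actual algorithm and glossary are not published, so everything here, from the pairing scheme to the prefix matching, is our own illustration.

    # Hypothetical bilingual suggester: a prefix typed in either language
    # surfaces the paired domain term in both English and Spanish.
    GLOSSARY = [  # (English, Spanish) pairs from a computer science vocabulary
        ("linked list", "lista ligada"),
        ("for loop", "ciclo for"),
        ("binary tree", "arbol binario"),
        ("sorting", "ordenamiento"),
    ]

    def suggest(prefix):
        """Return (English, Spanish) pairs where either term starts with prefix."""
        p = prefix.lower()
        return [(en, es) for en, es in GLOSSARY
                if en.startswith(p) or es.startswith(p)]

    print(suggest("li"))   # matched by the English term "linked list"
    print(suggest("cic"))  # matched by the Spanish term "ciclo for"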
Although Google Web Search does provide term suggestion, one of LOBSTER's strengths is that its suggestions are context sensitive in both English and Spanish simultaneously and that the suggested terms come from the specialized context of the subject area. LOBSTER also goes beyond providing automatic term translation: it supports editing the translated terms in a way that differs from the other bilingual search engines mentioned.

8 http://www.google.com/language_tools
9 http://www.2lingual.com
10 http://babelplex.com



With LOBSTER, when the user does not agree with a translated term in either of the two languages, he is able to edit the terms independently, and these changes are used in the searches for all the LO formats. On 2lingual, however, when the user edits the translation in one of the languages, the other language is automatically updated with a translation of the newly edited term. For example, if the user types for loop in the English textbox, 2lingual translates these terms to para el bucle. This is a literal translation of the terms and is not a correct one. If the user then edits the Spanish translation to ciclo for, 2lingual responds with the English translation loop, which loses the original sense of the terms. Since the translations are automatic and the user has no control over what gets translated, the user ends up having to accept an accurate search in one language and an inaccurate search in the other.

On Babelplex, as on LOBSTER, once terms are translated, the translations are independent, allowing for separate editing. Before the translation process is initiated on Babelplex, the user must choose the language of the original query and the language of the translation. Granted, Babelplex is able to translate among multiple languages, so it is understandable that the user must choose the languages beforehand. However, on LOBSTER the user does not need to differentiate between the language of the original query and that of the translation; the user simply writes the terms in either Spanish or English, and LOBSTER determines which terms need to be translated. In this way, LOBSTER reduces the number of things the user needs to keep track of or configure, letting him concentrate on the task of searching for the desired LO.

In terms of how search results are presented, there are similarities between Babelplex and LOBSTER. Both group search results according to language and type of result. On Babelplex, the types are Web, Image, Video, and Wikipedia, while on LOBSTER the types are documents in different file types, presentations, two types of interactive files, images, and videos. In a way, Babelplex breaks up HTML documents into two categories, Web and Wikipedia. LOBSTER, on the other hand, clusters Web content further by providing easy access to .doc, .pdf, .swf, applets, and .ppt files, to name a few. Accessing these same types of files from Babelplex requires typing in a query that specifies the file types, as shown in Figures 4 and 5. Therefore, LOBSTER provides simultaneous access to more types of content, and more easily, than Babelplex.

Finally, we present some of the comments participants wrote when they answered the questionnaire after having used LOBSTER to search for LOs in the second part of the study. One participant wrote, "it is easy to use and very friendly." Another indicated that with it "one can make more precise searches." Yet another wrote, "I do not consider myself good at searching, but it (LOBSTER) is easy to manipulate and I believe after using it for a while I would be able to make faster and more precise searches." Comments were positive, indicating participants found LOBSTER easy to use and helpful. There were also some comments that suggested ways to improve the user interface, such as changing the color of visited links and providing an option for selecting all LO formats with a single click.

7. CONCLUSIONS AND FUTURE WORK

A key contribution of this work is how the set of assistive components works together to support teachers during the entire process of searching for LOs. That is, teachers receive support from the moment they begin writing their query, through the textboxes that not only direct them toward writing complete descriptions of their LO requirements but also assist them by suggesting appropriate query terms. In addition, support for creating the more powerful advanced searches is provided in an almost transparent way, through automatic query completion.


When teachers are ready to browse through the search results, they are able to access these in a categorized manner that clearly shows the types of LOs found in each category. Finally, when it is necessary to modify a query and relaunch a search, the user interface supports this process by always keeping the previous query visible in the same state that the user left it, eliminating the need for users to remember what query terms and options led to the current state of results.

This study has provided valuable insights into the way teachers search for LOs. More importantly, it has helped to identify, and test, several ways to help teachers find LOs on the Web, such as translation of query terms to English and Spanish, bilingual topic-specific term suggestion, bilingual searches, and clustering of results according to language and LO type. While some of these features, such as translation of query terms and bilingual searches, are available on public search engines (Google, 2lingual, and Babelplex), the way we implemented them in LOBSTER gives users the flexibility of selectively editing their query terms to complement the automatic translations while still retaining access to bilingual searches. LOBSTER provides more focused support in the case of term suggestion, since the terms are suggested in both Spanish and English and are specifically directed to the technical field we focused on for our study. Finally, LOBSTER's search interface supports advanced searches in a manner that is unobtrusive and transparent to the user, allowing users to select among different LO formats by simply checking boxes. Facilitating users' access to advanced search promotes its use, which leads to better results. Improving the quality of queries through the use of advanced search operators is a research area that has been overlooked by the community [White and Morris 2007].

The LOBSTER search assistant helps users find LOs more successfully than using Google directly. There is also a significant difference in the time required to find some LOs between using Google and using LOBSTER. Not only that, but users found LOBSTER significantly more useful and easier to use than Google. This is an important finding because these two factors are correlated with intention to use [Davis 1989], from which we infer that participants are likely to continue using LOBSTER to search for LOs for the classes they teach.

However, the question remains as to whether search results can be further improved by adopting other ways of describing the LOs being sought. For example, the six categories of LOs proposed in Churchill [2007] could be used as search options that would consolidate the information that LOBSTER users currently specify through the LO Formats and Additional Terms components. This would not only simplify the user interface but could also provide a richer description, which could translate into better search results. In addition, for users who might not be as familiar with the file formats listed in the LO Formats component, it might be helpful to describe the LOs by simply selecting one of the categories and having LOBSTER map these categories to the terms and file formats in which these types of LOs would most likely be found. Since our study focused exclusively on computer science and engineering professors, future versions of LOBSTER will concentrate on expanding the assistance facilities to provide coverage for other fields of study and languages.
To do this, however, further work is needed in order to (a) characterize how professors outside computer science search for LOs, (b) identify the obstacles they face when searching for LOs, and (c) identify ways to overcome these obstacles by leveraging existing tools and/or creating new ones to provide assistance for them. In addition, the LOBSTER user interface itself needs to be reevaluated, since it is so closely linked to the computer science field. Future work should also determine whether the components used for the user interface are appropriate for users outside computer science and, if not, what components are better suited for these users.


In addition, we plan to explore expanding LOBSTER's searches to include other popular public search engines, such as Yahoo!, in order to help users obtain a wider coverage of the LOs found on the Web. As demonstrated in White et al. [2008], even though the majority of users use a single search engine, they might benefit if they switched search engines for about half their queries. One way we envision exploring this is through the use of metasearch engines [Spink et al. 2006] that gather results from search engines providing coverage of a significant number of sites containing LOs.

A final issue we plan to address in our future work with LOBSTER is facilitating access to the benefits of the much-touted wisdom of crowds through the use of social searching. This was one of the points that participants requested in their suggestions for improving the search for LOs, but that we decided to implement at a later time. We believe this is an important issue that warrants further study, given the current growth in the use of social networks and the availability of social bookmarking tools such as Delicious11 and Xmarks.12

Our work highlights some aspects that need improvement in the search for LOs. Even though we have focused on the Web, this work could easily be extended and applied to the search for LOs in learning object repositories (LOR) and learning management systems (LMS). The current trend of building federations of LOR is creating ever larger repositories which, though hardly approaching the size of the Web, nevertheless require better search facilities. The goal of these search facilities should be to assist and guide users so they can construct queries that scour the LOR and return those LOs that most closely meet their criteria. Current search facilities in LOR could be redesigned to include components, like those in LOBSTER, that provide assistance and guidance to the user when specifying the query and when browsing through the results. Sharing of resources, specifically LOs, stored in LMS could also be improved by implementing search mechanisms like the ones we propose. In addition, it would be possible to implement social searching by leveraging the LMS user database in conjunction with the LO database. Also, by combining searches across all three platforms, the Web, LOR, and LMS, users would be able to access a broader pool of LOs. In the end, however, successful searching depends not only on having a plentiful source of LOs, but also on having a mechanism for wading through the pool of LOs to find exactly what one is looking for.

11 http://www.delicious.com/
12 http://www.xmarks.com

REFERENCES

Belkin, N. J. 2000. Helping people find what they don't know. Comm. ACM 43, 8, 58–61.
Belkin, N. J., Kelly, D., Kim, G., Kim, J.-Y., Lee, H.-J., Muresan, G., Tang, M.-C. M., Yuan, X.-J., and Cool, C. 2003. Query length in interactive information retrieval. In Proceedings of SIGIR. ACM, 205–212.
Broisin, J. and Vidal, P. 2006. A management framework to recommend and review learning objects in a web-based learning environment. In Proceedings of the 6th International Conference on Advanced Learning Technologies. 41–42.
Cacheda, F. and Via, A. 2001. Understanding how people use search engines: A statistical analysis for e-business. In Proceedings of the e-Business and e-Work Conference and Exhibition. 319–325.
Capra, R., Marchionini, G., Oh, J. S., Stutzman, F., and Zhang, Y. 2007. Effects of structure and interaction style on distinct search tasks. In Proceedings of the 7th ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL'07). ACM, New York, 442–451.
Chen, H. and Dumais, S. 2000. Bringing order to the web: Automatically categorizing search results. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'00). ACM, New York, 145–152.
Chen, J. and Bao, Y. 2009. Cross language search: The case of Google language tools. First Monday 14, 3.


Churchill, D. 2007. Towards a useful classification of learning objects. Educat. Tech. Res. Develop. 55, 5, 479–497.
Cisco Systems. 2001. Reusable learning object strategy: Designing information and learning objects through concept, fact, procedure, process, and principle templates. http://www.cisco.com/warp/public/10/wwtraining/elearning/implement/rlo_strategy.pdf (accessed 5/02).
Davis, F. D. 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quart. 13, 3, 319–339.
Doorten, M., Giesbers, B., Janssen, J., Daniels, J., and Koper, R. 2004. Transforming Existing Content into Reusable Learning Objects. Routledge, New York, NY, Chapter 9.
Downes, S. 2004. Learning Objects, Resources for Learning Worldwide. Routledge, New York, NY, Chapter 1.
Dumais, S., Cutrell, E., and Chen, H. 2001. Optimizing search by showing results in context. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'01). ACM Press, New York, 277–284.
Dunning, J. 2002. Talon learning object system. http://www.indiana.edu/scstest/jd/learningobjects.html (accessed 4/03).
Farrell, R. G., Liburd, S. D., and Thomas, J. C. 2004. Dynamic assembly of learning objects. In Proceedings of the 13th International World Wide Web Conference on Alternate Track Papers & Posters (WWW Alt.'04). ACM Press, New York, 162–169.
Friesen, N. 2001. What are educational objects? Interac. Learn. Environ. 9, 3, 219–230.
Graddol, D. 2000. The Future of English? A Guide to Forecasting the Popularity of the English Language in the 21st Century. The British Council.
Hargittai, E. 2004. Classifying and coding online actions. Soc. Sci. Comput. Rev. 22, 2, 210–227.
Hassan, S. and Mihalcea, R. 2009. Learning to identify educational materials. In Proceedings of the Conference on Recent Advances in Natural Language Processing (RANLP).
Hearst, M. A. 2009. Search User Interfaces. Cambridge University Press, Cambridge, UK.
Hölscher, C. and Strube, G. 2000. Web search behavior of internet experts and newbies. In Proceedings of the 9th International World Wide Web Conference on Computer Networks. North-Holland Publishing Co., Amsterdam, The Netherlands, 337–346.
Huang, C.-K., Chien, L.-F., and Oyang, Y.-J. 2003. Relevant term suggestion in interactive web search based on contextual information in query session logs. J. Amer. Soc. Inf. Sci. Tech. 54, 7, 638–649.
IEEE. 2002. IEEE standard for learning object metadata. IEEE Standard 1484.12.1. Tech. rep., IEEE.
Internet World Stats. 2010. http://www.internetworldstats.com/languages.htm (accessed 2/10).
Jansen, B. J. and Spink, A. 2005. An analysis of web searching by European alltheweb.com users. Inf. Process. Manage. 41, 2, 361–381.
Jansen, B. J., Spink, A., and Saracevic, T. 2000. Real life, real users, and real needs: A study and analysis of user queries on the web. Inf. Proc. Manage. 36, 2, 207–227.
Käki, M. 2005a. Findex: Search result categories help users when document ranking fails. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'05). ACM Press, New York, 131–140.
Käki, M. 2005b. Optimizing the number of search result categories. In Proceedings of Human Factors in Computing Systems (Extended Abstracts) (CHI'05). ACM, New York, 1517–1520.
Keleberda, I., Repka, V., and Biletskiy, Y. 2006. Building learner's ontologies to assist personalized search of learning objects. In Proceedings of the 8th International Conference on Electronic Commerce (ICEC'06). ACM, New York, 569–573.
Kelly, D., Dollu, V. D., and Fu, X. 2005. The loquacious user: A document-independent source of terms for query expansion. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'05). ACM, New York, 457–464.
Kelly, D., Gyllstrom, K., and Bailey, E. W. 2009. A comparison of query and term suggestion features for interactive searching. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'09). ACM, New York, 371–378.
Kelly, D., Cushing, A., Dostert, M., Niu, X., and Gyllstrom, K. 2010. Effects of popularity and quality on the usage of query suggestions during information search. In Proceedings of the 28th International Conference on Human Factors in Computing Systems (CHI'10). ACM, New York, 45–54.
Koper, R. 2001. Modelling units of study from a pedagogical perspective: The pedagogical meta-model behind EML. Open University of the Netherlands, Heerlen. http://eml.ou.nl/introduction/docs/pedmetamodel.pdf (accessed 6/02).


Koper, R. 2003. Combining Reusable Learning Resources and Services with Pedagogical Purposeful Units of Learning. Kogan Page, London, 46–59.
Koper, R. and van Es, R. 2004. A First Step Towards a Theory of Learning Objects. Routledge, New York, NY, Chapter 3.
Learning Alberta. 2002. Learn Alberta glossary. http://www.learnalberta.ca/l (accessed 4/03).
Liaw, S.-S. and Huang, H.-M. 2003. An investigation of user attitudes toward search engines as an information retrieval tool. Comput. Hum. Behav. 19, 6, 751–765.
Mayer, R. E. and Moreno, R. 2003. Nine ways to reduce cognitive load in multimedia learning. Educa. Psych. 38, 1, 43–52.
McGreal, R., Ed. 2004. Online Education Using Learning Objects. Routledge, New York, NY.
Moore, D. S. and McCabe, G. P. 2006. Introduction to the Practice of Statistics. W. H. Freeman and Company.
Morales, R., Ochoa, X., Sanchez, V. G., and Ordonez, V. 2009. La flor-repositorio latinoamericano de objetos de aprendizaje. In Recursos Digitales para el Aprendizaje. Ediciones de la Universidad Autonoma de Yucatan, 308–317.
Morris, M., Morris, M. G., and Dillon, A. 1997. The influence of user perceptions on software utilization: Application and evaluation of a theoretical model of technology acceptance. IEEE Softw. 14, 58–76.
Mortimer, L. 2002. (Learning) objects of desire: Promise and practicality. Learning Circuits. http://www.learningcircuits.org/2002/apr2002/mortimer.html (accessed 6/03).
Mostafa, J. 2005. Seeking better web searches. Sci. Amer., 66–73.
Nash, S. S. 2005. Learning objects, learning object repositories, and learning theory: Preliminary best practices for online courses. Interdisciplinary J. Knowl. Learn. Obj. 1.
Neven, F. and Duval, E. 2002. Reusable learning objects: A survey of LOM-based repositories. In Proceedings of the 10th ACM International Conference on Multimedia (MULTIMEDIA'02). ACM, New York, 291–294.
Ochoa, X. and Duval, E. 2006. Use of contextualized attention metadata for ranking and recommending learning objects. In Proceedings of the 1st International Workshop on Contextualized Attention Metadata: Collecting, Managing and Exploiting of Rich Usage Information (CAMA'06). ACM, New York, 9–16.
Ortega, B. H., Martinez, J. J., and Hoyos, M. J. M. D. 2007. Aceptacion empresarial de las tecnologias de la informacion y de la comunicacion: Un analisis del sector servicios. J. Inf. Syst. Technol. Manage. 4, 1, 3–22.
Petrelli, D., Levin, S., Beaulieu, M., and Sanderson, M. 2006. Which user interaction for cross-language information retrieval? Design issues and reflections. J. Amer. Soc. Inf. Sci. Technol. 57, 5, 709–722.
Polsani, P. R. 2004. A First Step Towards a Theory of Learning Objects. Routledge, New York, NY, Chapter 8.
Pressman, R. 2006. Software Engineering: A Practitioner's Approach. McGraw-Hill Science/Engineering/Math.
Quinn, C. and Hobbs, S. 2000. Learning objects and instructional components. Educational Technology and Society 3, 2.
Recker, M., Dorward, J., Dawson, D., Halioris, S., Liu, Y., Mao, X., Palmer, B., and Park, J. 2005. You can lead a horse to water: Teacher development and use of digital library resources. In Proceedings of the 5th ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL'05). ACM, New York, 1–8.
Rehak, D. and Mason, R. 2003. Keeping the Learning in Learning Objects. Kogan, London, 20–34.
Rieger, O. 2009. Search engine use behavior of students and faculty: User perceptions and implications for future research. First Monday 14, 12.
Seyedarabi, F. 2006. The missing link: How search engines can support the informational needs of teachers. eLearn Mag.
Si, L. and Callan, J. 2005. Modeling search engine effectiveness for federated search. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'05). ACM, New York, 83–90.
Sigurbjornsson, B., Kamps, J., and de Rijke, M. 2005. Blueprint of a crosslingual web retrieval collection. In Proceedings of the 5th Dutch-Belgian Information Retrieval Workshop, R. van Zwol, Ed. Utrecht University, Center for Content and Knowledge Engineering.


Skår, L. A., Heiberg, T., and Kongsli, V. 2003. Reuse learning objects through LOM and XML. In Companion of the 18th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA'03). ACM, New York, 78–79.
Sloep, P. B. 2004. Reuse, Portability and Interoperability of Learning Content. Routledge, New York, NY, Chapter 10.
Sosteric, M. and Hesemeier, S. 2004. A First Step Towards a Theory of Learning Objects. Routledge, New York, NY, Chapter 2.
Spink, A., Jansen, B. J., Blakely, C., and Koshman, S. 2006. A study of results overlap and uniqueness among major web search engines. Inf. Proc. Manage. 42, 5, 1379–1391.
Teevan, J. 2008. How people recall, recognize, and reuse search results. ACM Trans. Inf. Syst. 26, 4, 1–27.
Teevan, J., Morris, M. R., and Bush, S. 2009. Discovering and using groups to improve personalized search. In Proceedings of the 2nd ACM International Conference on Web Search and Data Mining (WSDM'09). ACM, New York, 15–24.
Thompson, C., Smarr, J., Nguyen, H., and Manning, C. D. 2003. Finding educational resources on the web: Exploiting automatic extraction of metadata. In Proceedings of the ECML Workshop on Adaptive Text Extraction and Mining.
Walraven, A., Brand-Gruwel, S., and Boshuizen, H. P. 2009. How students evaluate information and sources when searching the world wide web for information. Comput. Educat. 52, 1, 234–246.
Weller, M., Pegler, C., and Mason, R. 2003. Putting the pieces together: What working with learning objects means for the educator. Elearn International, Edinburgh.
White, R. W. and Morris, D. 2007. Investigating the querying and browsing behavior of advanced search engine users. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'07). ACM Press, New York, 255–262.
White, R. W., Richardson, M., Bilenko, M., and Heath, A. P. 2008. Enhancing web search by promoting multiple search engine use. In Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'08). ACM, New York, 43–50.
Wieseler, W. 1999. RIO: A standards-based approach for reusable information objects. Cisco Systems. http://www.cisco.com/warp/public/779/ibs/solutions/publishing/whitepapers/ (accessed 5/00).
Wiley, D. A. 1999. The post-lego learning object. http://wiley.ed.usu.edu/docs/post-lego/ (accessed 4/03).
Wiley, D. A., Ed. 2001. Instructional Use of Learning Objects. Agency for Instructional Technology.
Xu, Y. and Mease, D. 2009. Evaluating web search using task completion time. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'09). ACM, New York, 676–677.

Received March 2010; revised October 2010, April 2011; accepted May 2011

