Measuring Experienced Utility of Information Retrieval Systems: Experimental Approach

Irene Lopatovska
University of North Texas
P.O. Box 311068
Denton, Texas 76203-3101
[email protected]

Introduction

This poster presents the results of an experimental research project conducted in the Information Science Ph.D. program at the University of North Texas under a grant from the Texas Center for Digital Knowledge (TxCDK). The experiment was largely based on the naïve methodology suggested by Cooper (1973) for applying utility to measuring the effectiveness of Information Retrieval (IR) systems. The concepts of expected utility, experienced utility, instant utility, remembered utility, and predicted utility were borrowed from the fields of psychology (Schreiber & Kahneman, 2000) and behavioral economics (Kahneman, Wakker, & Sarin, 1997) and tested on an experimental system and individual system outputs (e.g., abstracts, articles). The resulting analysis identifies the relationships between a) the subjective valuation of an individual output and that output's contextual relevance to the participant's information need; b) the subjective valuations of individual outputs and the valuation of the overall system; and c) the subjective valuations measured immediately after system use and after a period of time. The research findings improve understanding of systems' value to users, which can lead to improvements in information system design and pricing strategies; help to better understand users' information behavior, including the selection of sources to answer an information need and the formation of perceptions about information systems; and benefit other disciplines by validating interdisciplinary methodology and concepts and by demonstrating that information is a unique but measurable resource.

Background

Utility is an established concept in the scientific community. It refers to a measure of a user's affective valuation that helps to explain the choices made in a decision process. In information science, the umbrella term relevance is often used to define concepts explaining users' decisions to accept or reject information (Schamber, 1994), while utility is usually defined as the usefulness of the information or search results (Su, 1998; Regazzi, 1988). Cooper (1973) defined utility as whatever the user finds to be of value about the system output: usefulness, entertainment, or aesthetic value. Cooper's concept of utility has never been applied in information science to the measurement of IR systems and their outputs. However, utility as an affective valuation measure has been extensively tested in other social sciences (Kahneman, Wakker, & Sarin, 1997). Redelmeier and Kahneman (1996) measured several types of utility in patients undergoing colonoscopy procedures; Kahneman, Fredrickson, Schreiber, and Redelmeier (1993) measured utility in participants undergoing cold-water immersion trials; Schreiber and Kahneman (2000) measured utility in an experiment with harsh sounds. IR systems and their outputs are not cold water, harsh sounds, or an unpleasant medical procedure, and users of these systems are not confronted with physical pains or pleasures. However, in the process of satisfying information needs, users experience "pleasure" when they find systems or outputs useful, easy to use, and otherwise satisfying, and "pain" when they do not. These subjective "pleasures" and "pains" involved in the information-seeking process can be measured by applying the utility concepts validated in these experiments.
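The remembered-utility results cited above share a common pattern: retrospective evaluations track the most intense moment and the final moment of an episode rather than its duration-weighted total. The sketch below (Python, with hypothetical numbers that are not taken from the cited studies) illustrates how an episode with more total discomfort can nonetheless be remembered as less aversive because it ends mildly.

```python
# Illustrative sketch only: hypothetical values, not data from the cited studies.
# It contrasts total (duration-weighted) experienced utility with a peak-end
# summary, the pattern behind "when more pain is preferred to less"
# (Kahneman et al., 1993) and remembered utility of aversive sounds
# (Schreiber & Kahneman, 2000).

def total_utility(instant_utilities):
    """Sum of moment-by-moment (instant) utility over an episode."""
    return sum(instant_utilities)

def peak_end_utility(instant_utilities):
    """Average of the worst moment and the final moment (aversive episode)."""
    return (min(instant_utilities) + instant_utilities[-1]) / 2

short_episode = [-8, -9, -7]              # short trial with an intense ending
extended_episode = [-8, -9, -7, -4, -2]   # same trial plus a milder ending

print(total_utility(short_episode), total_utility(extended_episode))        # -24 vs. -30: more total discomfort
print(peak_end_utility(short_episode), peak_end_utility(extended_episode))  # -8.0 vs. -5.5: yet remembered as less aversive
```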

Methods

The experiment involved graduate students of the School of Library and Information Science. The participants were asked a question related to the biographical facts of a famous person and given a list of Google-like outputs with links to sources containing potential answers to the question. While searching for the answer, participants were asked to value each output by assigning a dollar amount they would be willing to pay for the output and by rating their feelings about the output (from extremely positive to extremely negative). Participants were also asked to rate their feelings about the overall IR system performance. Every question that solicited a dollar or emotional valuation was paired with an open-ended question ("Why did you rate the output/system the way you did?") to identify the factors that affected participants' valuations. Basic demographic information was collected from participants as well. Two weeks later, the same participants were asked a different question regarding the life of a famous person and were asked to find the answer in the same experimental system. Repeating the conditions validated the measures and allowed the researcher to collect data on remembered utility. Experimental data were analyzed using regression, canonical correlation analysis, and structural equation modeling techniques. A model identifying the relationships between output properties and users' valuation measures was developed.
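As a rough, non-authoritative sketch of how the collected measures could feed the analyses named above, the Python fragment below fits a regression and a canonical correlation analysis on simulated data. The column names (relevance, feeling, dollar_value, remembered_value) and the simulated relationships are assumptions for illustration only; they are not the study's instruments, data, or scripts, and the structural equation model is omitted.

```python
# Illustrative analysis sketch on simulated data (hypothetical variables).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 120  # hypothetical number of output valuations

# Simulated per-output records: contextual relevance judgment (0/1),
# affective rating (1-7), willingness to pay, and the valuation given
# in the follow-up session two weeks later.
data = pd.DataFrame({
    "relevance": rng.integers(0, 2, n),
    "feeling": rng.integers(1, 8, n),
})
data["dollar_value"] = 0.5 * data["relevance"] + 0.3 * data["feeling"] + rng.normal(0, 0.5, n)
data["remembered_value"] = data["dollar_value"] + rng.normal(0, 0.3, n)

# Regression: do relevance and affective rating predict the instant dollar valuation?
model = smf.ols("dollar_value ~ relevance + feeling", data=data).fit()
print(model.summary())

# Canonical correlation: relate the block of instant measures to the block
# containing the remembered valuation and the relevance judgment.
X = data[["dollar_value", "feeling"]]
Y = data[["remembered_value", "relevance"]]
cca = CCA(n_components=1).fit(X, Y)
x_scores, y_scores = cca.transform(X, Y)
print("First canonical correlation:", np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1])
```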

Conclusion

Applying interdisciplinary concepts of utility to information-seeking situations is innovative: it offers empirical methods for measuring the subjective effectiveness of IR systems and their outputs and for comparing it to the outputs' physical properties. The implications of applying utility as a measure of IR system effectiveness include a better understanding of systems' value to users, leading to improvements in information system design and pricing strategies; a better understanding of users' information behavior, including the selection of outputs to answer an information need and the formation of perceptions about information systems; and an understanding of information as a unique but measurable resource.

ACKNOWLEDGMENTS

The experimental design and analysis of findings benefited from discussions with Prof. William Cooper, Prof. Brian O'Connor, Dr. Linda Schamber, and Dr. William Moen. The research was partially funded by a Texas Center for Digital Knowledge (TxCDK) grant.

REFERENCES

Cooper, W. (1973). On selecting a measure of retrieval effectiveness. Part I. The 'subjective' philosophy of evaluation. Journal of the American Society for Information Science, 24, 87-100.

Kahneman, D., Fredrickson, B. L., Schreiber, C. A., & Redelmeier, D. A. (1993). When more pain is preferred to less: Adding a better end. Psychological Science, 4, 401-405.

Kahneman, D., Wakker, P. P., & Sarin, R. (1997). Back to Bentham? Explorations of experienced utility. Quarterly Journal of Economics, 112, 375-405.

Redelmeier, D., & Kahneman, D. (1996). Patients' memories of painful medical treatments: Real-time and retrospective evaluations of two minimally invasive procedures. Pain, 66, 3-8.

Regazzi, J. J. (1988). Performance measure for information retrieval systems – an experimental approach. Journal of the American Society for Information Science, 39(4), 235-251.

Schamber, L. (1994). Relevance and information behavior. Annual Review of Information Science and Technology (ARIST), 29, 3-48.

Schreiber, C. A., & Kahneman, D. (2000). Determinants of the remembered utility of aversive sounds. Journal of Experimental Psychology: General, 129, 27-42.

Su, L. T. (1998). Value of search results as a whole as the best single measure of information retrieval performance. Information Processing and Management, 34(5), 557-579.