
Half Right Reference is Wrong: Some New and Important Findings From A Major Study of 12 Public Libraries in California

Submitted Solely To: Mr. John N. Berry III, Editor-in-Chief Library Journal 245 West 17th Street New York, NY 10011 Internet: [email protected] Telephone: (212) 463-6822

Submitted by: Dr. John V. Richardson Jr., Professor, UCLA and Presidential Scholar, LSSI LLC Internet: [email protected] or [email protected] Telephone: (310) 206-9369

Graduate School of Education and Information Studies Department of Information Studies GSE&IS Bldg, Suite 204, CB 951520 Los Angeles, CA 90095-1520

Half Right is Wrong -- 2

What is good reference service? What are the desirable outcomes? Does the quality of reference service depend most on the library, the librarian, or the user? Reference librarians have been asking these questions for at least three decades now, or even longer if you count Edith Guerrier’s “The Measurement of Reference Service,” which appeared in the July 1936 issue of Library Journal. Starting in the late 1960s, reference researchers such as the late Terry Crowley[1], Charles Bunge[2], and Thom Childers[1] (the latter two of whose articles have appeared in these very pages) have argued over the answers. Though they have most often defined quality reference service in one of three ways, as accuracy, utility, or user satisfaction, many other formulations have been proposed. Herbert Goldhor’s theory, for example (“Performance = Accuracy = Staff Ability + Library Collection”), reflects the thinking of many writers on this topic. Of course, Bunge might have preferred “Performance = Efficiency = Accuracy/Time = Staff Ability + Library Collection,” while Crowley would have said “Performance = Accuracy = Library Collection + Staff Ability = Budget.” Whatever the formulation, in such prior studies of reference service, accuracy has usually meant a number of judges scoring ten or twenty so-called typical questions on a scale ranging from “completely answered” to “not answered at all.” The results of these studies, unfortunately, are all too familiar: half-right reference service.

In a related vein, much work has also gone into the scoring of questions. See, for example, the coauthored work by Cheryl Elzy, Alan Nourie, F. W. Lancaster, and Kurt Joseph[3], or John Richardson and Rex Reyes[4]. Still, the results come up the same: half-right reference service. Utility and user satisfaction, on the other hand, have been measured by exit surveys as the user leaves the library, also with now-familiar results: users found at least something they wanted, so they expressed very high satisfaction with the outcome. These results seem to contradict one another. How can we have half-right reference service, yet high utility and satisfaction findings?

If you are really fascinated with this topic, you can find more studies like these posted on the Web; nearly 1,000 citations on various aspects of reference service are listed at http://purl.org/net/reference. If you can read only a couple of items, I would recommend just two: Kenneth Crews[5] and Matthew L. Saxton[6], both of whom have written good, critical reviews of this literature. Notably, both found a lack of agreement on the definition of reference service; inconsistent operational definitions of both the independent and outcome variables; bias due to lack of random sampling; simplistic statistical procedures; low sample sizes; little replication of prior studies to confirm or dispute earlier findings; and a lack of attention to theory.

What is really interesting about this subject is that most prior studies have found the three most important outcomes (i.e., accuracy, utility, and satisfaction) to be almost totally unrelated, which suggests that they are driven by different underlying factors. For instance, users often indicated that they were satisfied even when they did not receive a useful response from a librarian. Users also indicated they received useful information even when the information was inaccurate. Many of us have found such results difficult to understand; it just doesn’t make sense that our users would be so happy with half-right reference service. It is so counterintuitive; how could those findings be true? Indeed, many reference librarians have wanted to disagree with such research results in letters to the editor and at various conference presentations. Until now, however, it has been hard to argue with that research without some solid alternative evidence. Throughout my own research on this topic since the late 1980s, I can personally recall several conversations with Terry Crowley, who wanted to believe that this contradictory situation was a methodological artifact and who maintained that someone would resolve the apparent conflict some day. He even wrote an RQ article in 1985 asking whether or not half-right reference service was true[7]. It’s not, but now we know why.

Although Crowley did not live long enough to learn all the details, it turns out that the so-called typical “fact-type” queries used in all of the previous accuracy studies were representative of only half of all queries received. In Saxton’s and my new coauthored study of twelve public libraries in southern California (the Anaheim Public Library; Azusa Public Library; Beverly Hills Public Library; El Segundo Public Library; Glendora Public Library; the Heritage Park Regional Branch of the Orange County Public Library; San Juan Capistrano Public Library; Pomona Public Library; Santa Monica Public Library; South Pasadena Public Library; Torrance Public Library; and Yorba Linda Public Library), we determined that the so-called “55% rule” has never been tested against a truly representative field sample. In 90% of the cases in this study, a panel of reference experts determined that librarians recommended an accurate source or an accurate strategy in response to a user’s query. Notably, the most important factor predicting accuracy was the difficulty of the query. This finding is intuitively obvious; it makes sense. The earlier work did not make sense, and now we know why: the reference service performance model was overly simplistic, the samples were far too small, and the test questions simply were not representative of real-world reference questions.

For the first time, we now have a study with a sophisticated model, one of the largest samples ever (N = 9,274 persons inquiring for assistance), and questions drawn from the library users’ realm, all analyzed using the latest statistical techniques. It is also important to know about some of the other significant findings from this study. For instance, library users are more satisfied by those librarians who actively practice the reference skills outlined in the Reference and User Services Association’s “Guidelines for Behavioral Performance of Reference and Information Services Professionals”[8], which include inviting queries, expressing interest, listening critically, and verifying user satisfaction. One might now confidently say the model should read “User Satisfaction = Librarian Behavior [based on those RUSA guidelines].” Furthermore, these findings suggest that the RUSA guidelines are really customer service guidelines. Finally, the probability of an individual’s finding useful and complete information depends not only on the librarian’s reference skills but is also predicted by the user’s familiarity with the library and his or her level of education. The model might now read: “Usefulness [utility] = User’s Familiarity + User’s Education + Librarian’s Behavior [following the RUSA guidelines].” Perhaps these latter findings are not surprising, but they do suggest the need for some concrete actions. The reference profession should also stop kicking itself.
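Read as statistical specifications, the two additive models above resemble ordinary regression equations. The sketch below is purely illustrative: the variable codings, coefficients, and function name are hypothetical assumptions of mine, not the study's actual estimates. It shows only the additive form of the utility model, with each predictor contributing independently to the chance that a user finds useful, complete information.

```python
import math

def p_useful(familiarity, education, rusa_behavior):
    """Illustrative logistic model of the utility outcome.

    Each argument is coded 0 (low) to 1 (high). The coefficients are
    invented for demonstration only; they are NOT the study's estimates.
    The additive structure mirrors the model
    Usefulness = Familiarity + Education + Librarian Behavior.
    """
    z = -1.0 + 0.8 * familiarity + 0.5 * education + 1.2 * rusa_behavior
    return 1.0 / (1.0 + math.exp(-z))  # logistic link: probability in (0, 1)

# A frequent, college-educated user served by a librarian following the
# RUSA behavioral guidelines gets a higher predicted probability than a
# first-time user with neither advantage.
high = p_useful(1, 1, 1)
low = p_useful(0, 0, 0)
print(high > low)
```

The point of the sketch is simply that each factor raises the predicted probability on its own, which is what an additive model claims; in the actual study the coefficients were estimated from the survey data.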

We are doing reference work much better than we thought. We might make S. S. Green proud yet! For those who want more details about this study, the preceding analysis is based on a random sample of over 3,500 actual reference queries posed by users at twelve public libraries in southern California. These queries can be classified as 50% ready reference, 10% research questions, and 40% frequently asked questions. Of the remaining 5,754 queries, 37% were directional, 10% were internal referrals, and another 15% were unrecorded approaches. A high rate of return (67.4%) of user surveys reduces the risk of self-selection bias, ensuring that the utility findings (“Usefulness [utility] = User’s Familiarity + User’s Education + Librarian’s Behavior [i.e., the RUSA guidelines]”) and the satisfaction findings (“User Satisfaction = Librarian Behavior [based on those RUSA guidelines]”) mentioned above are both reliable and valid. Readers wishing to explore this topic in greater detail should take a look at Matthew L. Saxton’s and my recently published coauthored book, Understanding Reference Transactions: Transforming an Art into a Science (New York: Academic Press, April 2002). What readers should take away from this new work is that reference staff development should focus on the principles outlined in the RUSA Behavioral Guidelines, available at http://www.ala.org/rusa/stnd_behavior.html. Provocatively, I would also like to suggest that more utility measures be used in the future assessment of reference, particularly if one is trying to distinguish between good and poor service. Finally, students of reference service should learn about the existence of multiple performance outcomes (i.e., accuracy, utility, and satisfaction) and recognize that each outcome is driven by different factors. As we all know, many computer users rely on brute-force search engines without the help of reference librarians, a change in user behavior called “disintermediation.” So, most importantly, we need to view reference service as the same interpersonal process envisioned by Samuel Swett Green[9], especially as we move toward Web-based “24/7” virtual reference enterprises.

REFERENCES

1.

Childers, Thomas A. "Telephone Information Service in Public Libraries: A Comparison of Performance and the Descriptive Statistics Collected by the State of New Jersey." In Information Service in Public Libraries: Two Studies, edited by Terence Crowley and Thomas Childers. Metuchen, N.J.: Scarecrow Press, 1971.

2.

Bunge, Charles A. "Professional Education and Reference Efficiency." Ph.D. diss., University of Illinois, 1967.


3.

Elzy, Cheryl; Nourie, Alan; Lancaster, F. W.; and Joseph, Kurt M. "Evaluating Reference Service in a Large Academic Library." College and Research Libraries 52 (1991): 454-465.

4.

Richardson, John V., and Reyes, Rex. "Expert Systems for Government Information: A Quantitative Evaluation." College and Research Libraries 56 (May 1995): 235-247.

5.

Crews, Kenneth D. "The Accuracy of Reference Services: Variables for Research and Implementation." Library & Information Science Research 10 (1988): 331-355.

6.

Saxton, Matthew L. "Reference Service Evaluation and Meta-analysis: Findings and Methodological Issues." Library Quarterly 67 (July 1997): 267-289.

7.

Crowley, Terence. "Half-right Reference: Is It True?" RQ 25 (1985): 59-68.

8.

American Library Association. Reference and Adult Services Division (RASD). Ad Hoc Committee on Behavioral Guidelines for Reference and Information Services. "Guidelines for Behavioral Performance of Reference and Information Services Professionals." RQ 36 (Winter 1996): 200-203.

9.

Richardson, John V. "Samuel S. Green (1837-1918)." American National Biography 9 (1999): 507.