Using research in practice

Collective decisions

Andrew Booth

Although this column opportunistically reviews evidence as it comes to light, it is somewhat apposite that the first three articles in the regular 'Using Research In Practice' series each focus on a different one of the 'Six Domains of EBL' previously identified in this journal by Crumley & Koufogiannakis.1 So the first column (March 2003) highlighted the domain of Marketing/Promotion and looked at the use of questionnaires, while the second (June 2003) dealt with Information Access and Retrieval and examined the comprehensiveness of literature searches. This, our third article, deals with a domain that appears, on first viewing at least, to be exclusively a concern of librarians, namely Collection Development. In actuality, our keen interest in journal quality is shared by the evidence-based practice community, with its concerns about the validity, applicability and reliability of the articles placed within those journals.

As in previous columns we start with a scenario. You are part of a Working Group looking at the purchase of a 'bundle' of electronic journals for your local health community. The Working Group includes various stakeholders: managers, educators, clinicians and doctors in training. In discussing your community's requirements, a representative of the Postgraduate Deanery queries the process by which journal titles are selected for local libraries. She asks: 'What is the best way of judging whether a journal is of high quality and should be purchased from the core budget?' There then follows an extensive debate during which the reliability of such factors as circulation figures, peer review, impact factor and presence on a core list is disputed. 'Alas!', you muse, 'if only someone had conducted a systematic review to look at journal selection criteria'.

While the prospect of some deus ex Medline may seem remote, relief is at hand. Recent preoccupations with the peer review process are reflected in a special issue of JAMA where several apparent indicators of quality are analysed.2 You decide to take a closer look at the results.

About the study

The authors' hypothesis is that peer review and bibliometric methods (such as impact factors, circulation figures, MEDLINE indexing or presence on a core list) may be useful in evaluating the quality of a journal. These methods are said to be controversial because of biases in citation, impact factor and what are described as 'inherent limitations of the sources of information used to calculate them'.2 The authors claim that none of these bibliometric parameters has been validated in connection with journal quality, and they use the methodological quality of research articles as a proxy measure for journal quality.

The authors used a computer-generated list of random numbers to select 30 journals at random from the 107 general internal medicine journals classified as such by the Institute for Scientific Information. They excluded journals that were not in English or were unavailable through the University of California library system. Original research articles published between January and December 1999 were then identified by database searching. They excluded a large number of publication types, focusing instead on randomised controlled trials (RCTs) and on other, non-RCT, empirical studies. They extracted data on seven factors thought to have a possible bearing on journal quality:


1 Peer review.
2 Citation rate.
3 Impact factor.
4 Circulation.
5 Manuscript acceptance rate.
6 Indexed in MEDLINE.
7 Listed in the Brandon/Hill Library list.
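The selection step lends itself to a short illustration. The sketch below is not code from the study (which reports only that a computer-generated list of random numbers was used); it simply shows, in Python and with invented journal names and field labels, what drawing 30 titles without replacement from a pool of 107 and setting up a record of the seven factors might look like.

import random

# Illustrative pool: in the study this was the 107 journals classified as
# 'general internal medicine' by the Institute for Scientific Information.
candidate_journals = [f"Journal {i:03d}" for i in range(1, 108)]

# Draw 30 titles at random, without replacement, mirroring the authors'
# use of a computer-generated list of random numbers.
random.seed(1999)  # any fixed seed makes the draw reproducible
sample = random.sample(candidate_journals, k=30)

# For each sampled journal, prepare a record of the seven candidate factors
# (field names here are illustrative, not taken from the paper).
factors = ["peer_review", "citation_rate", "impact_factor", "circulation",
           "acceptance_rate", "medline_indexed", "brandon_hill_listed"]
records = {title: dict.fromkeys(factors, None) for title in sample}

print(len(records), "journals selected; fields per journal:", factors)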

Is peer review an indicator of quality?

Unfortunately this particular research study, though purporting to examine the importance of peer review, is unable to do so. Why? Because of the way in which the sample was selected. The authors selected journals for inclusion from an issue of Journal Citation Reports and then, as already mentioned, restricted the articles studied to those retrieved by database searching. Although not all such journals are necessarily peer reviewed, the MEDLINE journal selection process attaches such weight to peer review that it is not surprising to find that every journal in the authors' sample is, without exception, peer reviewed. The same is true of MEDLINE indexing itself. Clearly, you cannot analyse the importance of a factor if there are no exceptions to that factor within the group you are analysing.

This situation is exacerbated by the two exclusion criteria. We know that MEDLINE journals are more likely to be in English, so excluding journals in languages other than English increases the likelihood that the remaining journals are indexed in MEDLINE and, as noted above, are therefore more likely to be peer reviewed. Exclusion of journals that are unavailable within the University of California library system is even more problematic. What factors are likely to have determined whether those journals were originally selected for the library? Back in the mists of time some librarian probably asked 'Is it peer reviewed?' and 'Is it indexed in MEDLINE?'!

So what can we conclude? That journals listed in Journal Citation Reports are likely to be indexed in MEDLINE and are also likely to be peer reviewed! That journals selected for the University of California library system are also likely to be peer reviewed and indexed in MEDLINE! Not exactly rocket science; in fact quite the reverse: an apparent methodological flaw that, ironically, appears in an article that takes issue with the limitations and biases of bibliometric methods. Surely it would have been more meaningful to sample from Ulrich's International Periodicals Directory (already on hand as a source for circulation figure data), where neither being peer reviewed nor being indexed in MEDLINE necessarily determines inclusion.
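The point about a factor with no exceptions can be shown numerically. In the sketch below the quality scores are invented and the sample is imaginary; it merely demonstrates that once every journal in a sample is peer reviewed, the 'peer review' variable has zero variance, so no association with quality can be computed whatever test is applied.

import statistics

# Hypothetical quality scores for ten sampled journals (invented numbers).
quality = [0.62, 0.48, 0.71, 0.55, 0.80, 0.44, 0.67, 0.59, 0.73, 0.51]

# Every journal in the sample is peer reviewed, so the predictor is constant.
peer_reviewed = [1] * len(quality)

print("variance of predictor:", statistics.variance(peer_reviewed))  # 0

# Pearson's r divides by the predictor's standard deviation; with a constant
# predictor that denominator is zero, so the correlation is undefined.
try:
    statistics.correlation(peer_reviewed, quality)  # Python 3.10+
except statistics.StatisticsError as err:
    print("correlation undefined:", err)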

Is impact factor a predictor of quality?

Within many academic units, journal impact factor has assumed an almost mythical importance because of its past association with the Research Assessment Exercise, the quinquennial review of academic output from UK universities. Although criteria for judging research output have broadened beyond this much debated and almost absurdly authoritative measure, it still holds sway among the factors that determine where a researcher will target a proposed article. Does it have a bearing on journal quality? One will resist the temptation to comment further on the fact that the journal sample was selected not from some value-free listing but from an issue of Journal Citation Reports published by ISI.

The authors were able to observe a 'significant association' between journal quality score (measured using an instrument that they had developed) and both impact factor (P < 0.001) and citation rate (P < 0.001). In addition, when controlling for RCT status, citation rate was the factor with the smallest P-value (P < 0.001). The authors conclude that articles of higher methodological quality are published in journals with a higher citation rate and impact factor. However, they do not comment on whether the higher citation rate and impact factor may themselves be attributable to the fact that the articles are of higher methodological quality. Would you not be more likely to read, and then cite, a high quality paper than a lower quality one?
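As background, the figure published in Journal Citation Reports is a simple ratio: citations received in a given year to items a journal published in the two preceding years, divided by the number of citable items it published in those years. A worked example with invented counts (not figures for any journal in the study):

# Invented counts for a hypothetical journal, 2002 impact factor.
citations_2002_to_2000_2001 = 1500   # citations in 2002 to items published 2000-2001
citable_items_2000_2001 = 400        # articles and reviews published 2000-2001

impact_factor_2002 = citations_2002_to_2000_2001 / citable_items_2000_2001
print(f"2002 impact factor: {impact_factor_2002:.2f}")  # 3.75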


Are circulation figures related to quality?

The authors observed circulation figures ranging from 1080 copies to 3.7 million copies among the journals they studied, and found that a higher circulation figure was associated with higher methodological quality (P = 0.001). So higher quality journals are more widely read! While such an observation might not necessarily hold for literary outputs such as national newspapers, where objectives other than edification are frequently involved, one would be surprised if clinicians chose to read journals in which they could not place professional confidence. Consider the two main purchasers of medical journals: individual subscribers and libraries. Individual subscribers are unlikely to spend money from their own pockets on a journal of low perceived quality. Even more clearly, libraries frequently feel duty bound to purchase only those journals that cross some threshold of 'quality'.

Is being on a core list of journals an indicator of methodological quality?

Core lists of journals, such as the Core Collection of Medical Books and Journals3 in the UK and the well-known Brandon/Hill lists4 in the US, have long persisted as external arbiters of journal quality. Ironically, as Eldredge recognizes, core lists also have a long pedigree of being challenged as to their validity.5 Eldredge describes how, in 1946, William D. Postell began the first known cohort study6 to question the practice of using recommended books and titles from 'authoritative lists' to guide selection decisions. Postell used a cohort design to determine how journals ranked highly on a list of recommended journals compared with the journals actually used by his clientele. This particular paper2 found that inclusion in the Brandon/Hill Library list was significantly associated (P < 0.001) with articles having higher quality scores. Again, we might be concerned that inclusion in a core list could itself be determined by some of the factors already examined separately, such as whether the journal is peer reviewed, whether it is indexed in MEDLINE, and its circulation figures.
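Neither the raw circulation figures nor the quality scores are reproduced here, but a small sketch with invented numbers shows the kind of associations being reported in these two sections: a continuous factor (circulation, which spans several orders of magnitude and so is easier to read on a log or rank scale) and a binary one (presence on the Brandon/Hill list). All values below are illustrative, not data from the study.

import math
from statistics import correlation, mean

# Invented illustrative data for eight journals.
circulation = [1080, 4500, 12000, 36000, 90000, 250000, 740000, 3_700_000]
on_brandon_hill = [0, 0, 0, 1, 0, 1, 1, 1]
quality_score = [0.41, 0.47, 0.52, 0.58, 0.61, 0.66, 0.70, 0.78]

# Continuous factor: correlate quality with log-circulation, since the
# reported range (1080 to 3.7 million copies) is heavily skewed.
log_circ = [math.log10(c) for c in circulation]
print("circulation vs quality:", round(correlation(log_circ, quality_score), 2))

# Binary factor: compare mean quality scores for listed vs unlisted journals.
listed = [q for q, b in zip(quality_score, on_brandon_hill) if b]
unlisted = [q for q, b in zip(quality_score, on_brandon_hill) if not b]
print("mean quality, listed:", round(mean(listed), 2),
      "unlisted:", round(mean(unlisted), 2))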

Implications for practice

From a brief appraisal we have concluded that, although the authors affirm that 'articles of higher quality are published in journals whose articles are cited more frequently, read more widely and scrutinized more carefully', they have likely overlooked the fact that many of the measures they use display a complex interdependence. So peer review and inclusion in MEDLINE may have a bearing not only on the inclusion and exclusion criteria themselves (availability in the library and likelihood of being in English) but also on inclusion in a core list, and possibly on the magnitude of citation rates and impact factors.

In a recent editorial, Plutchak challenges 'evidence-based collection development', that is 'the notion that one should do those things that have been proved to work in similar situations under scientifically valid conditions, rather than relying on convention, common practice, or some sort of intuitive, educated guess about what the best thing might be'.7 Significantly, he does not take issue so much with the quality of the evidence but more 'because we do not really have a clear notion of what the problem is that we are trying to solve'. He goes on to say that:

We can (and do) speak in general terms about what a good collection is ... a complex balance of budget, usage, and good, educated guesses on the part of the librarians doing the choosing. While impact factors, selected lists, and other tools may be useful guides, they cannot be mechanically applied to defining what a good collection is.7

At this point it is worth taking stock of exactly what we might want to achieve by practising evidence-based information practice. Would the cause of our professional standing be furthered had we been able to reduce the process of journal selection to the simplicity of a mechanistic formula? Is it not more important to demonstrate the complexity of our professional decision making by making others aware of all the factors that may have a bearing on our final journal selection? In fact, we can use papers such as this one to reaffirm a definition of evidence-based librarianship that involves the complex interplay of research-derived, practitioner-observed and user-reported factors.8 Every librarian should learn to critique articles such as this one2 so that the reality of decision making informed by multiple factors such as peer review, database coverage and circulation figures is not hidden by the impression that these seven factors all operate independently of each other.
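One way of making that interdependence visible, had the extracted data been available, would be to inspect the correlations among the supposed predictors before interpreting any one of them. The sketch below uses a small invented table; the column names and values are illustrative, not data from the study.

import pandas as pd

# Invented values for six journals; in practice this table would hold the
# seven factors extracted for each of the 30 sampled journals.
df = pd.DataFrame({
    "impact_factor": [0.9, 1.4, 2.1, 3.7, 6.2, 15.0],
    "citation_rate": [0.8, 1.2, 1.9, 3.1, 5.5, 13.0],
    "circulation":   [1500, 4000, 9000, 30000, 120000, 900000],
    "brandon_hill":  [0, 0, 1, 1, 1, 1],
})

# Pairwise rank correlations among the supposed predictors themselves:
# high values warn that these factors do not operate independently.
print(df.corr(method="spearman").round(2))

Strongly correlated predictors are exactly what the critique above anticipates: peer review, MEDLINE indexing, core-list inclusion and citation-based measures tend to travel together.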


So, although we might take issue with Plutchak's implied rebuttal of evidence-based practice, we can certainly agree that:

It is tempting, because we like to have things we can measure, to put our faith in impact factors or selected lists or other seemingly objective measures. ... The evidence-based movement pushes us in the direction of measurable goals and objectives. We need to be careful, however, not to confuse measurement with efficacy.7

However, Plutchak's subsequent and concluding statement that 'Collection development, as with so much else in librarianship (and medicine, for that matter), remains more of an art than a science'7 comes dangerously close to the appeals to magic or mysticism by which an elite traditionally defend their claim to specialist expertise. Instead, when faced with challenges such as that from the Working Group of our scenario, let us champion our profession as 'evidence informed' rather than 'evidence based', for as Plutchak reminds us:

Better data and better definitions will certainly help us, but, in the long run, we will continue to rely on the educated judgement of professional librarians who know the literature and, most important of all, know the needs of the people who use it.7

References

1 Crumley, E. & Koufogiannakis, D. Developing evidence-based librarianship: practical steps for implementation. Health Information and Libraries Journal 2002, 19, 61–70.
2 Lee, K. P., Schotland, M., Bacchetti, P. & Bero, L. A. Association of journal quality indicators with methodological quality of clinical research articles. JAMA 2002, 287, 2805–8.
3 Hague, H. (comp.) Core Collection of Medical Books and Journals 2001, 4th edn. London: Medical Information Working Party, 2000.
4 Hill, D. R. & Stickell, H. N. Brandon/Hill selected list of print books and journals for the small medical library. Bulletin of the Medical Library Association 2001, 89, 131–53.
5 Eldredge, J. D. SCC milestone in EBL history. South Central Connection (MLA/SCC Chapter Newsletter) 2003, 13 (2), 10, 14.
6 Postell, W. D. Further comments on the mathematical analysis of evaluating scientific journals. Bulletin of the Medical Library Association 1946, 34, 107–9.
7 Plutchak, T. S. The art and science of making choices. Journal of the Medical Library Association 2003, 91, 1–3.
8 Booth, A. Exceeding Expectations: Achieving Professional Excellence by Getting Research Into Practice. LIANZA 2000, Christchurch, New Zealand, 15–18 October 2000. Available from: http://www.shef.ac.uk/~scharr/eblib/Exceed.pdf (accessed 23 May 2003).

© Health Libraries Group 2003 Health Information and Libraries Journal, 20, pp.185–188
