
EDITORIAL

Publication footprints and pitfalls of bibliometry

Correspondence: H. Flaatten, Department of Clinical Medicine, University of Bergen, Bergen, Norway. E-mail: [email protected]

Conflicts of interest: All authors declare no conflicts of interest.

doi: 10.1111/aas.12655

There are many parties, or stakeholders, involved in biomedical research. These include funders (such as governmental bodies and research foundations), regulators (research grant agencies, policy-makers), medical journals (journal editors, reviewers, authors) and health care consumers (patients and others). All want to know whether the research being planned, performed, or published is reliable and important: whether a new scientific report presents meaningful and trustworthy knowledge, and how meaningful that knowledge may be. There is also intense competition for research support, among medical journals, for a place to publish a manuscript in a research journal, and finally for the reader's time, as well as the attention of the general public.

How can one know how important a published research finding (article) is in comparison to everything else that is published? This is a complex issue with no simple answer. From a medical journal's perspective, or as authors of a scientific paper, the ultimate goal is to improve patient care. This is achieved indirectly through communication of meaningful new scientific knowledge. A clinically oriented medical journal, such as Acta Anaesthesiologica Scandinavica, aims to improve patient care within our specialty. This is accomplished in part through contact with individual readers and practitioners, and through contributions to the general knowledge base, including pointing out fields that we do not yet understand but should seek to through future research.

Medical journals receive a rating based on indirect measures of the 'importance' of their collective articles. A common way to do this is by measuring how articles are cited by other articles, which is a part of bibliometry. Bibliometry

is the science devoted to the development and use of analytic methods to study literature and authorship using purely statistical criteria. As such, no actual assessment of the content is involved at all. There are several widely recognized bibliometric methods, serving different purposes for different research stakeholders. These bibliometric indices are important for publishers and journals as well as for individual authors.

Probably the best-known and most widely used bibliometric measure is the journal impact factor (IF).1 By convention, this factor is calculated for a specific year as the number of citations that year to the articles published in that journal during the previous 2 years, divided by the number of papers (so-called source items) the journal published in the same period. A journal with many citations and relatively few citable articles will have a high impact factor; a journal with relatively few citations but a large number of citable articles will have a low one. Clearly, the IF leads to a ranking of journals, and those with a high IF are usually perceived as the most attractive or strongest ones. Authors aim to publish in the journal with the highest impact factor that will accept their work, and grant agencies may take into account in which journals an applicant has published when rating the researcher's track record.

A major shortcoming of the IF is that it can be misleading in various ways. It is the journal that receives an impact factor, not an individual article. Each journal publishes the best research from among the material it receives as submissions. Most journal editors (personal communication within our field) follow journal impact factors with concerned attention, though they must make editorial decisions based on the scientific merits of individual contributions, one article at a time. Still, citations increase for a journal when an author cites other work from the journal.
Authors are not unaware of this. Journals can increase citations by publishing articles that focus on earlier publications in their own journal (reviewing their own publications is one way), and they can decide to publish a higher number of papers that are not counted as source items, such as editorials and letters to the editor. An analysis has shown that less than 25% of the published items in major journals were citable items.2
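The conventional IF calculation described above can be sketched as a one-line computation; the journal and its citation counts in this example are hypothetical.

```python
def impact_factor(citations_in_year: int, source_items: int) -> float:
    """Journal impact factor for year Y: citations received in Y
    to articles from years Y-1 and Y-2, divided by the number of
    source items (citable articles) published in Y-1 and Y-2."""
    if source_items == 0:
        raise ValueError("no citable source items in the window")
    return citations_in_year / source_items

# Hypothetical journal: 480 citations in 2016 to its 2014-2015
# articles, of which 200 counted as source items.
print(impact_factor(480, 200))  # 2.4
```

Note that only the denominator is restricted to source items; citations to editorials and letters still count in the numerator, which is exactly the asymmetry the text describes.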

Acta Anaesthesiologica Scandinavica 60 (2016) 3–5 ª 2015 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd



Several institutions and even countries use the journal IF to "reward" individual authors. In Scandinavia, an official system is used that allocates publication "points" to the authors of scientific papers.3 More points are given if the journal has a high IF. Universities may also suggest that their researchers only submit to journals with impact factors above a certain limit. Is this a practical way to get new and innovative scientific questions and findings published?

The main issue, which arises when regulators and university leaders focus on bibliometrics in order to establish credibility for researchers, is that the pursuit of higher impact factors tends to discourage new and innovative research questions and findings, since these often have more difficulty getting accepted in scientific journals. It is also very difficult to see what impact factors have to do with advancing patient care (the mission of our journals). Further, the pursuit of higher impact factors means that researchers can commit less time to other very important aspects of their professions, such as leadership, teaching, peer review, facilitating the research of others, and patient advocacy.

Institutional focus on impact factors can also lead to a ranking of authors, which can likewise be misleading. Although the average number of citations for a journal may be high, say 10 (over 2 years), this does not mean that every paper is cited 10 times. Usually a small portion of the papers is cited very frequently, with a "tail" of many papers cited much less frequently, or perhaps never. From this point of view, other bibliometric measures are needed for specific authors. The crude number of citations for a particular author is not difficult to retrieve. Still, a couple of very highly cited papers may inflate the mean value for an author, even if the rest of their portfolio is seldom cited.
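The skew described above is easy to demonstrate with a small, entirely hypothetical set of 2-year citation counts: the mean (the quantity an impact-factor-style average reports) sits far above what a typical paper in the set actually receives.

```python
from statistics import mean, median

# Hypothetical citation counts for a journal's 10 articles:
# two highly cited papers and a long tail of rarely cited ones.
citations = [60, 25, 8, 3, 2, 1, 1, 0, 0, 0]

print(mean(citations))    # 10.0 -> the "average" looks impressive
print(median(citations))  # 1.5  -> what a typical paper receives
```

The same arithmetic applies to an individual author's portfolio: one or two blockbuster papers can carry the mean on their own.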
To overcome this problem, a relatively new bibliometric value was introduced in 2005: the h-index.4 To quote the original publication: "A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np – h) papers have ≤ h citations each." Hence, if an author has an h-index of 29, she has 29 papers with at least 29 citations each (Figure 1). The higher the h-index, the more 'impact' the author has. The immediate advantage is that one or two highly

Figure 1. An actual researcher's curve of number of citations vs. paper number, with papers numbered in order of decreasing citations. The h-index is 29.

cited publications do not influence the index significantly, and several low-cited publications (in a large portfolio) will not affect the number. In our field (anaesthesiology), it has been shown that the h-index can be a sensitive indicator of academic activity.5 However, the h-index favours researchers with a long (more than 10 years) publication record, and younger researchers will not easily achieve a high h-index in a short time. It is also important to be aware that the h-index varies according to the input, the number of citations per paper. Hence, the index differs between databases, and is often higher in Google Scholar (scholar.google.com) than in other bibliometric databases.

Another issue that common bibliometric measures do not shed light on is the placement of an individual author within the list of authors. Usually we assume the first name to be the main author responsible for a paper, and the last to be the senior author or possibly the supervisor of the work. Frequently, a paper has many authors, and hence several between the first and the last. Their contribution to the work is less easy to judge, but may have been decisive. Traditionally, it may have been more meritorious to be the first or last author. At present, no widely used index system takes this into account, so such data must be retrieved manually.
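Hirsch's definition quoted earlier translates directly into code. The citation counts below are hypothetical; note how the two highly cited papers at the top barely move the index, exactly the robustness the text describes.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations
    each (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the paper at this rank still has >= rank citations
        else:
            break
    return h

# Hypothetical portfolio: two blockbusters, then a long tail.
portfolio = [120, 95, 40, 12, 9, 6, 5, 3, 1, 0]
print(h_index(portfolio))  # 6
```

Replacing the 120 with 1000 would leave the result unchanged, whereas lifting the tail papers above 6 citations each would raise it, which is the behaviour that makes the index resistant to a few outliers.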




It is apparent that the evaluation of a specific article, or of the publication merits of a specific author, must include more than one bibliometric value in order to present a fair appraisal. A multimodal approach is advocated, in which the number and profile of papers, the number of citations, the h-index and the frequency of first and last authorship should be the minimum bibliometric data used.

References

1. Kumar V, Upadhyay S, Medhi B. Impact of the impact factor in biomedical research: its use and misuse. Singapore Med J 2009; 50: 752–5.
2. McVeigh ME, Mann SJ. The journal impact factor denominator: defining citable (counted) items. JAMA 2009; 302: 1107–9.
3. Aagaard K, Bloch C, Schneider JW, Henriksen D, Ryan TK, Lauridsen PS. Evaluation of the Norwegian Publication Indicator – English Summary, January 2014. Danish Centre for Studies in Research and Research Policy, Department of Political Science and Government, Aarhus University. Available at: http://www.uhr.no/documents/Evaluation_of_the_Norwegian_Publication_Indicator___English_Summary.pdf (accessed 1 October 2015).
4. Hirsch JE. An index to quantify an individual's scientific research output. PNAS 2005; 102: 16569–72.
5. Pagel PS, Hudetz JA. H-index is a sensitive indicator of academic activity in highly productive anaesthesiologists: results of a bibliometric analysis. Acta Anaesthesiol Scand 2011; 55: 1085–9.

H. Flaatten1, L. S. Rasmussen2 and M. Haney3

1 Department of Clinical Medicine, University of Bergen, Bergen, Norway
2 Department of Clinical Medicine, Rigshospitalet, Copenhagen, Denmark
3 Anesthesiology and Intensive Care Medicine, Umeå University, Umeå, Sweden

