Jointly published by Akadémiai Kiadó, Budapest and Kluwer Academic Publishers, Dordrecht

Scientometrics, Vol. 54, No. 3 (2002) 381–397

Benchmarking international scientific excellence: Are highly cited research papers an appropriate frame of reference?

ROBERT J. W. TIJSSEN, MARTIJN S. VISSER, THED N. VAN LEEUWEN

Centre for Science and Technology Studies (CWTS), Leiden University, Leiden (The Netherlands)

This paper introduces a citation-based ‘systems approach’ for analyzing the various institutional and cognitive dimensions of scientific excellence within national research systems. The methodology, covering several aggregate levels, focuses on the most highly cited research papers in the international journal literature. The distribution of these papers across institutions and disciplines enables objective comparisons of their (possible) international-level scientific excellence. By way of example, we present key results from a recent series of analyses of the research system in the Netherlands in the mid 1990s, focusing on the performance of the universities across the major scientific disciplines within the context of the entire system’s scientific performance. Special attention is paid to their contribution to the world’s top 1% and top 10% most highly cited research papers. The findings indicate that these high performance papers provide a useful analytical framework – in terms of transparency, cognitive and institutional differentiation, and scope for domestic and international comparisons – and yield new indicators for identifying ‘world class’ scientific excellence at the aggregate level. The average citation scores of these academic ‘Centres of Scientific Excellence’ appear to be an inadequate predictor of their production of highly cited papers. However, further critical reflection and in-depth validation studies are needed to establish the true potential of this approach for science policy analyses and the evaluation of research performance.

Research quality and centres of scientific excellence

Scientific quality is a necessary pre-condition for pushing back research frontiers and opening up new fields of knowledge. Achieving and maintaining scientific excellence has always been crucial for leading researchers and scholars working at the international frontiers of science. The ability to excel at that level, and to be competitive in the international arena, has also become a strategic goal and explicit target of research institutes as a whole.

Nowadays, scientists and their research teams find themselves in an increasingly crowded and competitive marketplace, where even excellent proposals and a high reputation may not always secure funding or tenure, often owing to a combination of underfunding and overdemand. Managers of research institutions, funding agencies and (supra)national governments all face, for different reasons and goals, the same pervasive evaluative question: how can one define, recognize and compare ‘scientific excellence’ as objectively as possible? Research funding agencies in particular have to identify the individuals, teams and institutions they believe will make the best use of their scarce funds. How can (promising) first-rate research performers be identified with an acceptable degree of certainty? And, more specifically, how useful is the past performance of emerging or established research institutions for doing so?

Identification of ‘excellence’ is a matter of ex ante assessment or ex post evaluation of research performance. Clearly, such a broad and ambiguous concept is not directly measurable in a generally accepted, valid manner.* To begin with, there are numerous definitions of ‘scientific prestige’, ‘elite scientists’ and ‘hierarchies of reputation’ in the sociological literature, their exact meaning depending on the school of thought, theory or methodological context.1-3 Most of these notions apply to individual researchers or socio-cognitive collectives, rather than institutional aggregates. At the level of entire research groups, departments and main institutions, the conceptual and operational problems are further compounded by the diversity of research missions, capabilities, resources, facilities and outputs characterizing research organizations and their units.4 Not surprisingly, open and fair applications of peer review evaluation may be difficult to achieve.5,6 New developments in the field of quantitative studies of science offer methods to support peer review and to keep it objective and transparent.

In this article we touch on the following broad methodological questions. Is it possible to define ‘scientific excellence’ in a comparative institutional context? If so, can one develop valid metrics measuring quality-related features of research institutions? And are these measurements and indicators suitable for broad cross-country and cross-discipline ‘benchmarking’ comparisons at the international level? We will explore the analytical potential of advanced quantitative ‘bibliometric’ indicators to help describe and diagnose scientific excellence at the institutional level on a routine basis. In this study we narrow our scope to knowledge creation and transfer processes related to basic research. Our prime operational objective is to explore the feasibility of citation impact measures for identifying and tracking ‘Centres of Scientific Excellence’ (CoSEs) within the public sector of domestic research systems.

* We will use the term “excellence” in a dual sense: as a comparative expression to denote superiority to others, and in the sense of being of high quality.

Validation studies: revealing the many faces of scientific excellence

Citation-based validation studies

Obviously, gaining attention and recognition from colleagues and competitors for scientific accomplishments is an important step in establishing a solid reputation of scientific excellence. An obvious analytical step, therefore, is to track the intellectual influence of tangible research outputs on the external world in terms of their contributions to scientific progress and to pushing back international research frontiers – more particularly, their ‘impact’ within the relevant user communities, especially their international peer groups of researchers. Explicit references and acknowledgements (‘citations’) to a researcher’s papers within other scientific publications written by fellow researchers can be used as measures of these external impacts on their scientific environments. Reaching a top ranking in the global citation impact league within a discipline is without doubt a decisive piece of factual information concerning the scientific status and worldwide impact of a research paper and its associated researchers and institutions. Research publications receiving relatively large quantities of citations indicate significant scientific ‘impact’, a necessary condition for a researcher and the affiliated institution to attain broad visibility and success in terms of scientific recognition.

So far, only a few studies have attempted to systematically assess the general validity of these citation-based measurements of excellence. One of the most telling demonstrations of the importance of very high citation levels was a series of papers on the citation-based performance of Nobel laureates.7,8 These Nobelists are often ‘citation superstars’, receiving an order of magnitude more citations than other scientists in the same field.9 Narin10 was the first to review the state of the art of citation analysis techniques for evaluating the performance of scientific institutions. His results supported the idea that relatively high citation levels correlate with positive peer opinions of the importance of scientific papers, with peer rankings of research institutions and, more importantly, with other independent indicators of the scientific quality of research papers. In the wake of these pioneering studies, citation impact measures have been applied with increasing frequency in the 1980s and 1990s at the institutional level for evaluating and ranking the research performance of university departments and other research organizations.

However, there are many reasons for one research article to cite another, not all of which are directly related to the scientific quality of the cited work or the contributing researchers and institutions.11 Many critics of citation analysis have therefore objected to their use as surrogates for actual scientific communication processes and as derivative measures of scientific repute.12-16 The use of citation measures for quantitative assessments of research visibility and external impact, let alone scientific quality and excellence, is therefore still controversial – especially amongst advocates of purely ‘qualitative’ peer review-based evaluation methodologies.17

Peer review evaluations

Encouraged by the success of previous national evaluation studies of university research in the Netherlands in the 1980s and early 1990s, the Association of Universities in the Netherlands (VSNU) adopted an ongoing 5-year cycle of external evaluations of all major academic disciplines using international peer committees. These assessments take research departments or major research programs, rather than individual (chief) scientists or subunits, as the unit of analysis. The assessment is based on an official government-endorsed protocol stipulating that each evaluation is to cover five explicit criteria of research performance and capacity, including overall scientific quality. The publication output in peer reviewed international journals is a key element in assessing scientific quality. Each criterion is scored by the expert panel on a 5-point scale, thus providing composite measures of what one might refer to as ‘international scientific standing’. Departments or programs fortunate enough to receive performance scores of 4 or higher are generally considered to be excellent research departments or research programs at the international level. These VSNU quality scores will be used as a partial validation of citation impact scores later in this article (see Table 3).

Citations are of course essentially a post-factum, quantitative form of peer review. The citing colleagues and other users of scientific knowledge vote through their own writings by citing those researchers whose publications and work they consider relevant or essential to their own. A recent study by Rinia et al. compared citation impact scores and peer review-based evaluations of physics research groups in the Netherlands. The outcome shows mixed results, including some very significant positive correlations, but also, most importantly, an inability of peers to discriminate between the most highly cited groups and the ‘sub-toppers’ as defined by citation scores.18

Survey amongst the most highly cited authors in the Netherlands

The current analytical value of citation-based indicators of scientific excellence leaves many questions unanswered, especially at the micro level. In our validation study we sought out insider information from the authors of Holland’s most highly cited research papers of the 1990s. A small-scale mail questionnaire was distributed in October 2000 amongst the Dutch author(s) of the top 3 most cited research papers in ISI-indexed journals. We looked at 35 disciplines, covering all fields of science except law and the arts and humanities. The postal survey comprised a total of 79 standardized questionnaires – one for each paper – consisting of semi-structured questions, 68 of which were returned with partial or complete responses (86% response rate).*

In almost half of the cases (43%) the highly cited papers dealt with the introduction of a new concept, idea, theory, method, or model. A further 25% concern novel applications of existing theories, concepts, models, etc. Review articles comprise 18% of the highly cited papers. Finally, some 4% of the papers achieve their citation status by publishing new experimental or observational data for common use. Hence, although each type of paper adds to the scientific knowledge reservoir, only two-thirds (68%) of the high performance papers should probably be considered markers of ‘true’ scientific advance in the traditional sense of original research leading to new insights, approaches or applications.

How do these highly cited authors define research excellence in broad terms? Their answers show a preference for ‘internal’ epistemological criteria related to the intrinsic scientific quality of research: 55% vote for the broad category ‘Novelty, originality, and methodological rigor’ and another 23% opt for ‘Advances scientific progress within the research field’. However, a significant minority of 22% prefer ‘external’ criteria, either the sociological characterization ‘Explicit visibility and recognition among colleagues’ (12%) or the instrumental value of research described as ‘Usefulness, broad application, or impact, of findings outside the research field’ (10%). Interestingly, high citation scores do not always coincide with the authors’ views of their papers’ scientific quality or relevance – 10 authors (15%) did not perceive the research quality of their papers to be of international ‘world class’ level. In addition, although the majority of the respondents see a (very) strong positive relationship (59%) between a paper’s level of citedness and its scientific quality, over a third (36%) indicate that the scientific quality of highly cited papers tends to vary significantly within their own research field.

* A paper was pre-selected for inclusion if it belonged to the three most highly cited papers in the time interval 1994-1998 and listed at least one author with an affiliate address in the Netherlands (author self-citations excluded). Papers published prior to 1980 were not included in the analysis. The final selection included those papers that received at least 15 citations during those five years, and for which at least one Dutch author with a current mail address could be tracked down.
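For concreteness, the pre-selection rule in the footnote above can be sketched as a simple filter. This is a minimal sketch under an assumed, hypothetical record layout (keys ‘discipline’, ‘citations’, ‘has_nl_address’), not the actual CWTS selection procedure:

```python
# Minimal sketch of the survey's pre-selection rule, with a hypothetical
# record layout (not the CWTS implementation). Per discipline: keep the three
# most highly cited 1994-1998 papers with a Netherlands address and at least
# 15 citations (author self-citations assumed to be removed beforehand).

def select_survey_papers(papers, per_discipline=3, min_citations=15):
    """papers: iterable of dicts with keys 'discipline', 'citations'
    and 'has_nl_address'. Returns selected papers grouped by discipline."""
    selected = {}
    for paper in sorted(papers, key=lambda p: p["citations"], reverse=True):
        if not paper["has_nl_address"] or paper["citations"] < min_citations:
            continue
        bucket = selected.setdefault(paper["discipline"], [])
        if len(bucket) < per_discipline:
            bucket.append(paper)
    return selected

# Toy usage: one paper falls below the 15-citation threshold and is dropped.
papers = [
    {"discipline": "Physics", "citations": 120, "has_nl_address": True},
    {"discipline": "Physics", "citations": 10, "has_nl_address": True},
    {"discipline": "Biology", "citations": 55, "has_nl_address": True},
]
print({d: len(p) for d, p in select_survey_papers(papers).items()})  # {'Physics': 1, 'Biology': 1}
```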

These views seem to be shared across all disciplines: we find no significant statistical relationship across the main scientific fields of the various respondents (Pearson chi-square value = 6.3; not statistically significant).

Table 3 lists the university/discipline combinations with both HCP10% > 1.5 and HCP1% > 1.5. Pending validation, it would seem reasonable to regard these cases as possible ‘world class’ CoSEs. The volume of highly cited papers provides a rough estimate of their size, while their HCP scores are more likely to reflect the level of international excellence. The key question, of course, is: are these selected few indeed world class Centres of Excellence? As a first step in a validation process, we compared our scores with two other sources of empirical data reflecting their acknowledged excellence in the academic community: (1) the scientific quality of the international journals in which their research papers were published, and (2) peer review scores by international expert panels.
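The formal definition of the HCP indicators appears in a part of the paper not reproduced here. As a minimal sketch, assuming the common ‘observed over expected’ construction – the number of a unit’s papers that reach the world’s top 10% (or 1%) citation threshold in their field, divided by the number expected under world-average performance – with an invented threshold and data:

```python
# Hedged sketch of an HCP-style indicator (assumed construction, not the
# paper's formal definition): observed highly cited papers divided by the
# number expected if the unit performed at the world average.

def hcp_score(citations, field_threshold, top_share):
    """citations: citation counts of one unit's papers in one field;
    field_threshold: minimum citations needed to belong to the world's
    top `top_share` (0.10 or 0.01) of that field in the same period."""
    n = len(citations)
    if n == 0:
        return float("nan")
    observed = sum(1 for c in citations if c >= field_threshold)
    expected = top_share * n  # what a 'world average' unit of this size would get
    return observed / expected

# Toy example: 48 papers, 12 of which clear an assumed top-10% threshold of
# 20 citations, giving an HCP10% score of 12 / 4.8 = 2.5.
papers = [3] * 36 + [25] * 12
print(round(hcp_score(papers, field_threshold=20, top_share=0.10), 1))  # 2.5
```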

Table 3. Centres of Scientific Excellence in universities in the Netherlands: HCP10% > 1.5 and HCP1% > 1.5, 1990-1994 research papers with 5-year citation windows (a)

University and discipline       Papers in (b)     HCP-scores        Journal quality (c)       Peer scores (d)
                                top 10%  top 1%   top 10%  top 1%   all   top 10%  top 1%
KUN-Multidisciplinary               12       2      2.5      4.2    1.4     3.2     3.4      n.a.
KUN-Instruments and instrum.         7       2      4.1     13.8    1.1     1.3     1.2      n.a.
LEI-Biology                         61       6      2.1      2.1    1.1     1.6     1.6      4 out of 11
LEI-Materials sciences              22       4      2.3      3.7    1.0     1.3     1.4      3 out of 8
RUG-Physics                        143      12      1.9      1.6    1.3     1.9     2.2      1 out of 10
RUG-Biology                         65       8      2.1      2.7    1.1     1.5     1.8      6 out of 15
RUG-Food science                    10       2      2.2      4.9    1.2     1.7     2.0      (see Biol.)
TUE-Chemistry                       87       9      1.8      1.8    1.3     1.7     1.9      2 out of 12
UT-Chemistry                        94       7      2.4      1.8    1.3     1.9     2.5      3 out of 11
UT-Chemical engineering             35       3      2.9      2.1    1.5     1.7     1.4      (see Chem.)
UT-Mechanical engineering           19       3      2.6      4.5    1.2     2.0     3.1      1 out of 10
UU-Environmental sciences           26       3      1.7      2.0    1.1     1.4     1.5      1 out of 3
UvA-Biomedical sciences            153      25      1.7      2.8    1.2     1.9     2.2      n.a.
UvA-Multidisciplinary               14       2      2.1      2.9    1.9     2.9     3.5      n.a.
VU-Physics                          45      10      1.6      3.3    1.2     1.7     2.1      3 out of 9
VU-Biology                          40       7      2.3      4.0    1.2     1.9     2.5      1 out of 11
WUR-Agriculture                    161      20      1.5      1.8    0.9     1.2     1.6      3 out of 12
WUR-Chemistry                       72       6      2.2      1.7    1.0     1.4     1.5      1 out of 12
WUR-Clinical medicine               23       4      1.9      3.4    1.1     2.3     4.9      n.a.

(a) 5-year moving citation impact windows: 1990-94, 1991-95, etc.
(b) Publication counts based on fractional counting across (related) disciplines; rounded figures shown. Lower thresholds for inclusion: top 10% – 5 papers; top 1% – 2 papers.
(c) The average number of citations received by all the 1990-94 papers in a journal compared to the average of all journals assigned to the same discipline (excluding author self-citations).
(d) Number of research groups or programs that received the qualification ‘(almost) excellent’ (scores 4.5 or 5 on a 5-point scale) from external review boards of international experts assigned by the VSNU during the years 1995-2000 (source material: various VSNU disciplinary evaluation reports).
n.a. – data not available due to the absence of a VSNU evaluation or of university research programs that can be matched to the ISI journal-based disciplines.
Sources: CWTS/ISI Science Citation Index, Specialty Citation Indexes.
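The ‘journal quality’ index of footnote (c) can be illustrated with a short sketch. The footnote leaves open whether the discipline baseline is a paper-weighted or a journal-weighted average; a paper-weighted discipline mean is assumed below, and all numbers are invented:

```python
# Sketch of the 'journal quality' index of footnote (c): a journal's mean
# citations per 1990-94 paper divided by the corresponding mean for its
# discipline (paper-weighted baseline assumed; values invented).
from statistics import mean

def journal_quality(journal_citations, discipline_citations):
    """journal_citations: per-paper citation counts (self-citations removed)
    for one journal; discipline_citations: the same for all papers in all
    journals assigned to the discipline."""
    return mean(journal_citations) / mean(discipline_citations)

# A journal averaging 6 citations per paper in a discipline averaging 4
# scores 1.5, i.e. 50% above the discipline average (a score of 1).
print(journal_quality([5, 6, 7], [2, 4, 6, 4]))  # 1.5
```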

In view of the ongoing internationalization of science and the stratification processes in worldwide scientific reputation, one would expect CoSEs to publish their best papers in high-profile journals of above average quality, i.e., journals receiving more citations than the average journal in the discipline, as indicated by a score above 1 on the ‘journal quality’ index defined in Table 3. The results confirm our hypothesis: only 3 out of the 19 cases have scores of 1 or slightly below 1.

Obviously, to some extent one would expect the top 10%/1% highly cited papers to be published in the highly cited journals of the discipline, given the fact that it is exactly these highly cited papers that contribute to those high journal citation scores. However, it is interesting to see that in the large majority of the cases the journal quality is significantly higher for both the top 10% and the top 1% cited papers. Based on these first findings one can draw the tentative conclusion that CoSEs seek out the most highly cited journals for disseminating all their research papers in general, and their highly cited papers in particular.

As for the peer review assessments, we collected the findings from a series of university research performance evaluations conducted in the mid and late 1990s by international peer committees under the aegis of the VSNU. Of the 14 cases where an appropriate match or an acceptable degree of similarity could be established between the ISI journal-based disciplines and the VSNU-defined academic research fields, we find that in each and every case at least one of their research units or programs was considered ‘excellent’ according to the standards of the international experts.* The outcome of both validation studies therefore presents a strong case for substantiating the claim that these selected cases are indeed CoSEs by generally accepted international standards.

* In two cases disciplines were merged within a university in order to incorporate all relevant research units: Biology and Food science at the RUG, and Chemistry and Chemical engineering at the UT. Research activities of Leiden University in materials science were covered by the VSNU physics evaluation.

Comparing the results in Tables 2 and 3, one can only conclude that the average citation scores of a university in its main disciplines are of limited value as a ‘predictor’ of CoSEs within that university. In fact, there are only two cases where CoSEs ought to be expected given their overall citation impact levels displayed in Table 2: TUE and UT in Chemistry. Of all other cases listed in that table with considerable publication outputs in the most highly cited disciplines, only two (related) cases emerge as CoSEs: UT in Chemical engineering and UT in Mechanical engineering. Overall, correlation analysis between the field-normalized average citation scores and the corresponding HCP values across the whole set of university/discipline cases results in coefficients of r = 0.76 for the top 10% and r = 0.67 for the top 1%. In other words, only about half of the variance within the HCP rankings (58% for the top 10%, 45% for the top 1%) is accounted for by the average citation scores.
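As a minimal illustration of this correlation check, with placeholder vectors rather than the study’s actual university/discipline scores:

```python
# Illustration of the correlation check reported above; the two vectors are
# placeholders, not the study's data. For the reported coefficients,
# r = 0.76 and r = 0.67, the shared variance r**2 is roughly 0.58 and 0.45.
from statistics import mean

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

avg_citation_score = [1.2, 1.5, 0.9, 1.8, 1.1]  # field-normalized averages (placeholder)
hcp10 = [1.6, 2.4, 1.1, 2.9, 1.8]               # HCP10% values (placeholder)

r = pearson_r(avg_citation_score, hcp10)
print(round(r, 2), round(r * r, 2))  # correlation and share of variance explained
```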

Towards advanced scientometric indicators of research quality and excellence

The prime goal of the efforts described in this paper was to explore the drawbacks and technical feasibility of a citation-based methodological framework and analytical indicators for identifying possible research-performing institutions responsible for high quality science, ‘Centres of Scientific Excellence’ (CoSEs), independently of the views of subject experts or other relevant information sources. Our analyses of the Dutch research system, and our first attempt at systemic validation of these alleged university CoSEs within this system, indicate that the worldwide citation flows amongst international science journals, and the skewed distribution of those citations over institutions, enable a systemic and objective comparison of scientific excellence at several aggregate levels. Our results confirm that average citation impact scores of institutions are often inadequate indicators of their performance in terms of producing highly cited research papers in the international scientific literature. The latter therefore offer important added value as a tool for identifying and benchmarking ‘world class’ CoSEs.

Clearly, reaching the top of the international citation impact league in a discipline constitutes a telltale sign of first-rate scientific contribution, worldwide impact and scientific quality. As such, highly cited research papers appear to be a good starting point for searches for excellence and scientific leadership at the aggregate level of research institutions and laboratories. However, even though citation counts are generally regarded as a useful (partial) indicator of research quality at aggregate levels, and general agreement seems to exist that citations measure the international visibility and scientific influence of academic research in the ‘hard’ sciences, it is still not sufficiently clear how and to what extent citation measures actually capture the ‘intrinsic quality’ of research and ‘scientific excellence’ across a broader range of scientific fields. One of the many reasons for this state of affairs is the lack of an adequately discriminating standard for conclusive validation studies. So far, the only truly accepted measure of scientific quality is the performance of the small and elite group of influential scientists who are considered ‘excellent’ and receive prestigious science prizes and appointments from international committees, the Nobel prizes being the most well-known examples of such accolades.

Furthermore, citation data by definition reflect the track records and past performance of institutions and CoSEs, rather than their current standing or future scientific potential. Further (retrospective) validation studies are needed to ascertain the extent to which our findings are accurate, and especially whether or not they represent good predictors of current citation rankings. Clearly, many conceptual issues and methodological problems remain to be resolved before one can be confident that citation scores and HCP-values are able to pick out CoSEs across a wide range of countries and research areas with an acceptable degree of certainty and accuracy.

According to the golden rule of evaluation, a complex phenomenon requires multiple methods and viewpoints to assess its various dimensions.

This applies with full force to international scientific excellence, which is by nature a very elusive and multifaceted phenomenon.27 Hence, fully fledged quantitative assessments and benchmarking of scientific excellence in basic research at institutional levels should include a variety of methods and information sources, including peer review ratings, to arrive at sufficiently comprehensive and balanced evidence-based assessments of an institute in terms of its type and degree of international scientific excellence – especially in research fields where papers in international peer-reviewed journals are not the dominant vehicle for knowledge transfer. The results of our study indicate that highly cited research papers, and our newly developed HCP-index, show some promise of becoming a centrepiece in benchmarking scoreboards of international scientific excellence aimed at identifying and comparing a country’s ‘science jewels’.

* We would like to thank Ton Nederhof and our former colleague Mieke Bos for their valuable contributions to the CWTS survey amongst the most highly cited authors in the Netherlands.

References

1. COLE, S., J. COLE, Scientific output and recognition, American Sociological Review, (1967) 377–390.
2. MERTON, R. K., The Sociology of Science, Chicago: University of Chicago Press, 1973.
3. COLLINS, H. M., Knowledge, norms, and rules in sociology of science, Social Studies of Science, 12 (1982) 299–309.
4. PELZ, D. C., F. M. ANDREWS (Eds), Scientists in Organizations: Productive Climates for Research and Development (Revised edition), Ann Arbor, Michigan: Institute for Social Research, The University of Michigan, 1976.
5. HORROBIN, D., The philosophical basis of peer review and the suppression of innovation, Journal of the American Medical Association, 263 (1990) 1438–1441.
6. MOXHAM, H., J. ANDERSON, Peer review: a view from the inside, Science and Technology Policy, (1992) 7–15.
7. INHABER, H., K. PRZEDNOWEK, Quality of research and the Nobel prizes, Social Studies of Science, 6 (1976) 33–50.
8. GARFIELD, E., Do Nobel Prize winners write citation classics?, Current Contents, 23 (1986) 182.
9. GARFIELD, E., The 1991 Nobel prize winners were all citation superstars, Current Contents, 5 (1992) 3–9.
10. NARIN, F., Evaluative Scientometrics: The Use of Publication and Citation Analysis in the Evaluation of Scientific Activity, Report US National Science Foundation, 1976.
11. WEINSTOCK, N., Citation indexes, In: KENT, A. (Ed.), Encyclopedia of Library and Information Science, New York: Marcel Dekker, Vol. 5, 1971, pp. 16–41.
12. EDGE, D. O., Quantitative measures of communication in science: A critical review, History of Science, 17 (1979) 102–134.
13. MACROBERTS, M. H., B. R. MACROBERTS, Problems of citation analysis: a critical review, Journal of the American Society for Information Science, 40 (1989) 342–349.

14. COZZENS, S. E., What do citations count? The rhetoric-first model, Scientometrics, 15 (1989) 437–447.
15. LUUKKONEN, T., Why has Latour’s theory of citations been ignored by the bibliometric community? Discussion of sociological interpretations of citation analysis, Scientometrics, 38 (1997) 27–37.
16. LEYDESDORFF, L., Theories of citation?, Scientometrics, 43 (1998) 5–25.
17. VAN RAAN, A. F. J., The Pandora’s box of citation analysis: measuring scientific excellence – the last evil?, In: CRONIN, B., H. BARSKY ATKINS (Eds), The Web of Knowledge – A Festschrift in Honor of Eugene Garfield, Medford (NJ): ASIS Monograph Series, 2000, pp. 301–319.
18. RINIA, E. J., T. N. VAN LEEUWEN, H. G. VAN VUUREN, A. F. J. VAN RAAN, Comparative analysis of a set of scientometric indicators and central peer review criteria: evaluation of condensed matter physics in the Netherlands, Research Policy, 27 (1998) 95–107.
19. Citations reveal concentrated influence: some fields have it, but what does it mean?, Science Watch, 10 (1999) 1.
20. SEGLEN, P. O., The skewness of science, Journal of the American Society for Information Science, 43 (1992) 628–638.
21. AKSNES, D. W., G. SIVERTSEN, The effect of highly cited papers on national citation indicators, Proceedings of the 8th International Conference on Scientometrics and Informetrics, Sydney, Australia, July 2001, Sydney: University of New South Wales, 2001, pp. 23–30.
22. EUROPEAN COMMISSION, Progress Report on Benchmarking of National Policies, Brussels: Commission staff working paper, SEC(2001) 1002, 2001.
23. TIJSSEN, R. J. W., TH. N. VAN LEEUWEN, H. HOLLANDERS, B. VERSPAGEN, Wetenschaps- en Technologie-Indicatoren 2000, Leiden/Maastricht: CWTS/MERIT, 2001.
24. RIP, A., B. VAN DER MEULEN, Post-modern science system, Science and Public Policy, 23 (1996) 343–352.
25. VAN DER MEULEN, B., A. RIP, Mediation in the Dutch science system, Research Policy, 27 (1998) 757–769.
26. MOED, H. F., T. N. VAN LEEUWEN, M. S. VISSER, Trends in the publication output and impact of universities in the Netherlands, Research Evaluation, 8 (1999) 60–67.
27. MARTIN, B. R., The use of multiple indicators in the assessment of basic research, Scientometrics, 36 (1996) 343–362.

Received October 12, 2001.

Address for correspondence:
ROBERT J. W. TIJSSEN
Centre for Science and Technology Studies (CWTS)
Leiden University, Leiden, The Netherlands
E-mail: [email protected]
