19th Annual CHER conference, Kassel, 7-9 September 2006
Global university rankings: private and public goods Simon Marginson Centre for the Study of Higher Education, University of Melbourne, Australia
[email protected]
Introduction

Whole of institution university rankings simplify the complex world of higher education in two areas of great public and private interest: institutional performance, and institutional status. They emphasise vertical differences between institutions and between nations, differences of power and authority. They obscure horizontal differences, differences of purpose and type. Rankings are easily recalled, especially in the form of league tables, and quickly become part of common sense knowledge of the sector. Despite the attractions of horizontal diversity, league tables have a compelling popularity, regardless of questions of validity, of the uses to which the data are put, and of the effects on system organisation.

Institutional rankings from both private and governmental sources have long been used in national systems, sometimes to guide allocations of public funds. In the United States the annual US News and World Report (USNWR) survey, which commenced in 1983, has been very influential in determining institutional prestige and influencing flows of students, faculty and resources, and in shaping institutional strategies designed to maximise the US News position, for example by pushing up student scores and apparent rejection rates, often by shifting student aid from needs-based support to merit support. In some American universities enrolment managers are paid even more than football coaches, and far more than the President. In China several systems of rankings are in use (Liu & Liu, 2005). Now the world rankings from Shanghai Jiao Tong University, the Times Higher Education Supplement and, most recently, Newsweek – which selects and combines data from the first two systems – have universalised the rankings culture. Though they were variously devised, at the top of both the Jiao Tong and the Times rankings were the household names: Harvard, Stanford, Yale, Berkeley, MIT, Cambridge and Oxford. Instant plausibility. These rankings fed into the longstanding Ivy League imagining of itself as a kind of global aristocracy in its proud towers, recalling Bourdieu’s remark in Distinction (1984) about status as the ‘aristocracy of culture’.

It is not surprising that media companies are often in the forefront of rankings. University league tables sell. But there is more at stake here than the interests of Rupert Murdoch or US News. Rankings define the purposes, outputs and values of higher education and interpret it to the world at large, in a fashion that is far more compelling than either the policy reports of governments or the reasoned analyses of scholars of higher education. This suggests that we have a problem. Further, worldwide rankings norm world higher education as a single global market of essentially similar institutions able to be arranged in a ‘league table’ for comparative purposes. Rankings have given a powerful impetus to intranational and international competitive pressures and are already changing national policy objectives and institutional behaviours.
While there has been disquiet in higher education about the impact of the rankings, and numerous instances of critique of the methods (especially in institutions and nations where performance was below self-expectation), it is notable that there have been few concerted efforts to discredit the process. It appears that global ranking has secured mainstream public and policy credibility, and institutional acquiescence. Given this, research universities are impelled to succeed within the terms of the measures and will adopt institutional policies and strategies that optimise their position, especially their position in the Shanghai Jiao Tong rankings, which are based on credible metrics. Rankings have exacerbated competition for the leading researchers and best younger talent, and are likely to drive up the price of high performing researchers and research groups. Within national systems, the rankings have prompted the desire for more and higher ranked universities, both as symbols of national achievement and prestige and, supposedly, as engines of the knowledge economy. There is a growing emphasis on institutional stratification and research concentration. All these responses have cemented the role of the rankings themselves and further intensified competitive pressures. In this competition, institutions and nations with the best competitive capacity tend to reproduce their success. ‘The fact is that essentially all of the measures used to assess quality and construct rankings enhance the stature of the large universities in the major English-speaking centres of science and scholarship and especially the United States and the United Kingdom’ (Altbach, 2006). We might want to consider whether those changes are desirable, and whether they should be (and can be) changed.

In this paper I will look at the private and public goods, national and global, that are served by rankings; the public and private ‘bads’ simultaneously being created by rankings; and the potential of different rankings systems to change that mix of private and public goods. For further discussion of the concepts of private and public goods, see my keynote address to CHER in 2005, soon to be published in Higher Education (Marginson, in press).

By public goods in higher education I mean goods that (1) have a significant element of non-rivalry and/or non-excludability, and (2) are goods that are made broadly available across populations. Goods without attributes (1) and (2) are private goods. By global public goods I mean goods with a significant element of non-rivalry and/or non-excludability, made broadly available across populations on a global scale. They affect more than one country and are broadly available within countries. Global public goods include collective global goods, and positive or negative global externalities. Negative externalities are known as public ‘bads’. Collective global goods are obtained by nations and/or institutions from cross-border systems common to the world or a meta-national region, via regulation, systems and protocols: for example the Washington Accords in engineering, and the Bologna Declaration of a common European higher education space and research area. Global externalities arise when higher education in one nation affects a significant number of people in other nations, whether for better, for example the flow of research from one nation to others, or for worse, for example the ‘brain drain’ of national faculty (Marginson, in press; Kaul et al., 1999; Kaul et al., 2003).
Shanghai Jiao Tong University rankings

Technically the Shanghai Jiao Tong University (SJTU) rankings do not constitute a holistic comparison of universities, but, despite the efforts of the SJTUIHE group, they have been
widely interpreted as such (see note i). The major part of the SJTU index is determined by publication and citation in the sciences, social sciences and humanities: 20 per cent citation in leading journals; 20 per cent articles in Science and Nature; and 20 per cent the number of Thomson/ISI ‘HiCi’ researchers on the basis of citation (ISI, 2006). Another 30 per cent is determined by the winners of Nobel Prizes in the sciences and economics and Fields Medals in mathematics, in relation to their training (10 per cent) and their current employment (20 per cent). The remaining 10 per cent is determined by dividing the total derived from the above data by the number of staff (SJTUIHE, 2006).

The SJTU rankings favour universities large and comprehensive enough to amass strong research performance over a broad range of fields, while carrying few research-inactive staff. They also favour universities particularly strong in the sciences; universities from English-language nations, because English is the language of research (non-English-language work is published less and cited less); and universities from the large US system, as Americans tend to cite Americans (Altbach, 2006). 3,614 of the Thomson/ISI ‘HiCi’ researchers are in the USA. This compares with 224 in Germany, 221 in Japan, 162 in Canada, 138 in France, 101 in Australia, 94 in Switzerland, 55 in Sweden, 20 in China and none in Indonesia (ISI, 2006). Harvard and its affiliated institutes alone have 168 HiCi researchers, more than the whole of France or Canada. Stanford has 132 HiCi researchers, more than all the Swiss universities together; UC Berkeley 82 and MIT 74. There are 42 at the UK University of Cambridge.

The Nobel Prize criterion is the most controversial. The prizes are submission based; scientific merit is not the only determining factor; there is potential for politicking to enter decisions. Bloom (2005, p. 35) notes that of the 736 Nobel Prizes awarded up to January 2003, 670 (91.0 per cent) went to people from high-income countries as defined by the World Bank, the majority to the USA, with 3.8 per cent from Russia/Soviet Union and Eastern Europe, and 5.2 per cent from the emerging and developing nations. The last group had their best prospect of winning a Nobel Prize for Literature (10.1 per cent) or Peace (19.8 per cent), but these are excluded from the SJTU index. Of the nine scientists from emerging or developing countries with Nobels in Chemistry, Physics, or Physiology or Medicine, four were working in the USA and two in the UK and Europe.

What public and private goods are achieved by the Shanghai Jiao Tong rankings?

1. To the extent they derive from metrics of actual research output and its scholarly use – that is, the 60 per cent of the index based on publication and citation – they provide credible data on the distribution of past worldwide research activity and, by implication, present research capacity. This is useful information that we did not have and now share; it can be used to guide public and private decisions related to research activity in universities, and as such constitutes a global public good. Arguably, however, data concerning research outputs and citation specific to discipline would be more useful than whole of institution data.

2. On the other hand, to the extent that the Jiao Tong rankings are used as a surrogate whole of institution measure or as a de facto guide to where to find good quality teaching, they are likely to be associated with incomplete, misleading or bad decision-making, constituting public bads.

3. The Jiao Tong rankings constitute private status goods for institutions that score well, and negative goods for those institutions that score badly.
4. The rankings are likely to generate further competitive outcomes that in the longer term build the absolute (and probably relative) private goods accruing to the successful institutions and also their leading researchers. Correspondingly, the rankings also create public bads for institutions that lose status and resources within this circulation of competitive effects.

5. The Jiao Tong rankings contain incentives to shift university and government resourcing from teaching to research, with the potential to reduce the public and private benefits derived from teaching.

6. The rankings might generate national public goods in many nations by stimulating improved research performance – if the encouraging effects outweigh the discouraging effects. The jury is still out on that. In some nations at least, a low rankings position will feed into the policy assumption that it is unrealistic to develop a national research capacity.

7. Even more tendentiously, the rankings might lead to augmentation of worldwide research outputs in quantity terms, and hence global goods.

8. But if so, the rankings will stimulate certain kinds of research output at the expense of others: big science activity and English-language work are encouraged relative to other forms of scholarship and research. By encouraging homogenisation, for example by downplaying the value of work in languages other than English, the competitive logic of rankings reduces the potential for national public goods in many if not all nations.

9. Arguably, cultural and scientific diversity is a global public good, and the condition necessary for many other goods, public and private, national and global. To the extent that global rankings of the Shanghai Jiao Tong type tend to narrow the potential diversity, they reduce global public goods.

The Times Higher rankings of universities

Let’s look more briefly at the rankings produced by the Times Higher. The Times promises us ‘the best guide to the world’s top universities’ and a holistic ranking rather than one limited to research (Times Higher, 2006). It is not alone in this. Most rankings systems purport to ‘evaluate universities as a whole’ (Van Dyke, 2005, p. 106). Usher and Savino (2006) note the arbitrary character of the weightings used to construct composite indexes covering different aspects of quality or performance. ‘The fact that there may be other legitimate indicators or combinations of indicators is usually passed over in silence. To the reader, the author’s judgment is in effect final’ (Usher and Savino, 2006, p. 3). As Rocki (2005, p. 180) notes in reflecting on the Polish experience: ‘The variety of methodologies, and thus of criteria used, suggest that any single, objective ranking could not exist’.

Composite approaches muddy the waters and undermine validity. It is dubious to combine different purposes and the corresponding data using arbitrary weightings: links between purposes and data are lost (a toy numerical illustration of this weighting sensitivity appears at the end of this section). Likewise, it is invalid to mix subjective data on reputation with objective data on resources or research outputs. This does not stop the Times promising us the ultimate indicator, and it does so by breaching these principles of method. The Times rankings are particular, even idiosyncratic, suggesting that they have been crafted to achieve imagined ends.
Compared to the Jiao Tong, the Times places a high value on institutional reputation and also focuses on the level of ‘internationalisation’. These rankings appear to have been designed to service the market in cross-border degrees in which UK universities are highly active. A total of 40 per cent of the Times index is made up of an international opinion survey of academics, and another 10 per cent of a survey of ‘global employers’. There are two internationalisation indicators: the proportion of students who are international (5 per cent) and the proportion of staff who are international (5 per cent). Another 20 per cent is determined by the student-staff ratio, a proxy for teaching ‘quality’. The remaining 20 per cent consists of research citation performance. Compared to the Jiao Tong outcome, the Times rankings boost the number of leading British universities and reduce the number of US universities in the world’s top 100 from 54 to 31.

However, the Times Higher rankings are open to methodological criticisms. Reputational surveys indicate the market position of different institutions but not their merits, a distinction the Times fails to make. The surveys are non-transparent: it is not specified who was surveyed or what questions were asked. Further, the student internationalisation indicator rewards volume building, not the quality of student demand or programs; teaching quality cannot be adequately assessed using a resource quantity indicator such as student-staff ratios; and research plays a minor role in the index. The Times Higher rankings reward a university’s marketing division better than its researchers.

Further, by focusing on criteria relevant to the cross-border degree market, the Times rankings create anomalies. They inflate the performance of Australian universities, which achieve a massive 12 universities in the world’s top 100, compared to Canada, which has a similar system in many respects but with stronger research performance, better funding and higher participation. Canada had three universities in the Times top 100. This kind of outcome shows that the Times rankings are a rigged game. The company that owns the Times is Murdoch’s News Limited. Murdoch’s antecedents are Australia and the UK, and it is these nations that benefit. This is the politics of the global higher education space played out as if it is somebody’s private garden to be arranged at will.

Aside from reproducing the world which made Murdoch great, what mix of public and private goods is served by the Times intervention?

1. Unlike the Jiao Tong rankings, the Times rankings fail to provide credible data that might guide public and private decision-making.

2. On the other hand, like the Jiao Tong rankings, the Times rankings constitute private status goods for those institutions that score well, and negative goods for those institutions that score badly. In the international market in degrees they create immediate benefits for highly ranked universities, regardless of the qualities of their research and teaching.

3. And again, the Times rankings are likely to generate competitive outcomes that in the longer term will build the absolute (and probably relative) private goods accruing to the successful institutions; while the rankings also create public bads for those institutions that lose status and resources.

4. The Times rankings contain incentives to shift university and government resourcing from teaching to the categories rewarded by the index: principally reputation building, but also research, better staff-student ratios and internationalisation. This has the potential to reduce the public and private benefits derived from teaching.

5. Unlike the Jiao Tong rankings, the Times rankings fail to generate a strong dynamic of accumulation of research quantity and quality, given that research citation comprises just 20 per cent of the index. They are likely to stimulate the accumulation of marketing, but only someone from that industry would argue that this constitutes national and global public goods.
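To make the weighting critique concrete, the following toy sketch computes composite scores under SJTU-style and Times-style weightings. It is an illustration only: the two universities and all indicator scores are invented and normalised to an arbitrary 0-100 scale, and only the headline weights are taken from the published schemes summarised above. The same pair of institutions changes rank order when the weighting scheme changes, which is precisely the sensitivity to arbitrary weights that Usher and Savino identify.

    # Toy illustration (invented scores): composite rankings as weighted sums.
    # Only the weights reflect the published schemes summarised in the text;
    # all indicator values are hypothetical and normalised to 0-100.

    SJTU_WEIGHTS = {
        "alumni_nobel": 0.10,    # Nobel/Fields laureates among alumni
        "staff_nobel": 0.20,     # Nobel/Fields laureates among current staff
        "hici": 0.20,            # Thomson/ISI highly cited researchers
        "nature_science": 0.20,  # articles in Science and Nature
        "citation": 0.20,        # citation in leading journals
        "per_capita": 0.10,      # the above totals scaled by staff numbers
    }

    TIMES_WEIGHTS = {
        "peer_review": 0.40,     # international academic opinion survey
        "employer_survey": 0.10, # survey of 'global employers'
        "intl_students": 0.05,   # proportion of international students
        "intl_staff": 0.05,      # proportion of international staff
        "staff_student": 0.20,   # student-staff ratio, a teaching proxy
        "citations": 0.20,       # research citation performance
    }

    def composite(scores, weights):
        """Weighted sum over the indicators a given scheme uses."""
        return sum(w * scores.get(k, 0.0) for k, w in weights.items())

    # Two invented universities: research-strong versus reputation-strong.
    universities = {
        "Research U": {
            "alumni_nobel": 60, "staff_nobel": 70, "hici": 80,
            "nature_science": 75, "citation": 85, "per_capita": 65,
            "peer_review": 50, "employer_survey": 55, "intl_students": 30,
            "intl_staff": 35, "staff_student": 45, "citations": 85,
        },
        "Brand U": {
            "alumni_nobel": 20, "staff_nobel": 15, "hici": 25,
            "nature_science": 20, "citation": 30, "per_capita": 25,
            "peer_review": 90, "employer_survey": 85, "intl_students": 80,
            "intl_staff": 75, "staff_student": 70, "citations": 30,
        },
    }

    for label, weights in [("SJTU-style", SJTU_WEIGHTS), ("Times-style", TIMES_WEIGHTS)]:
        order = sorted(universities, reverse=True,
                       key=lambda u: composite(universities[u], weights))
        print(label, order)
    # SJTU-style ['Research U', 'Brand U']   (74.5 vs 22.5)
    # Times-style ['Brand U', 'Research U']  (72.25 vs 54.75)

The numbers matter less than the structure: a composite index is simply a weighted sum, and the rank order it yields is an artefact of weights for which the rankers offer no independent justification.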
Summary: problems of rankings

Usher and Savino (2006) look at 19 league tables and university rankings systems from around the world. Like Van Dyke (2005) they make the point that the different rankings systems are driven by different purposes and are associated with different notions of what constitutes university quality. Each ranking system norms particular notions of higher education and its benefits and effects. In the Jiao Tong universe, higher education is scientific research. It is not teaching or community building or democracy or solutions to local or global problems. In the Times universe, higher education is primarily about reputation for its own sake, about the aristocratic prestige and power of the universities as an end in itself, and also about making money from foreign students. It is not about teaching and only marginally about research. To accept these ranking systems is to acquiesce in these definitions of higher education and its purposes. Let us summarise the key problems generated by these rankings systems, problems that reduce or undermine the potential for public goods.

First, whole of institution rankings norm one kind of higher education institution with one set of institutional qualities and purposes, and in doing so strengthen its authority at the expense of all other kinds of institution and all other qualities and purposes. The Jiao Tong rankings not only norm comprehensive research universities, their blueprint is a particular kind of science-strong university in the Anglo-American tradition. Around the world there is considerable variation in the size, scope and functions of leading research universities. The 200,000-300,000 student national universities in Mexico City and Buenos Aires combine national research leadership with professional preparation and broad-based social access, and necessarily carry a large group of non-researching staff, disadvantaging them in the Jiao Tong index. Further, there are no cross-national measures of the performance of vocational education systems or institutions equivalent to the SJTUIHE measures in research universities. While in most nations vocational education commands lesser status than research-based universities, the German Fachhochschulen (vocational technical universities), relatively well resourced and with equivalent status to academic universities plus links to industry, are held in high international standing. Similar comments can be made about vocational provision in Finland, Switzerland and France. Another model held in high regard is the Indian Institutes of Technology (IITs). But in the absence of policy moves to shore up diversity by other means, attention to global research rankings may weaken the standing of non-research institutions and trigger the evolution of more unitary but vertically differentiated systems. There is no reason to assume that intensified competition will generate more national or global specialisation unless the incentive structure concurs.

Second, rankings become an end in themselves without regard to exactly what they measure or whether they contribute to institutional and system improvement. ‘League tables’ become highly simplistic when treated as summative, but this is normally the case. The desire for rank ordering overrules all else. A common problem is that in rankings systems institutions are rank ordered even where differences in the data are not statistically significant.

Third, rankings divert our attention from some of the central purposes of higher education.
No ranking or quality assessment system has been able to generate comparative data based on measures of the ‘value added’ during the educational process, and few comparisons focus on teaching and learning at all (Dill & Soo, 2005, pp. 503 & 505), though such data might be useful for prospective students (see note ii). Instead there are various proxies for teaching ‘quality’, such as resource quantity indicators, student selectivity and research performance. But ‘empirical
research … suggests that the correlation between research productivity and undergraduate instruction is very small and teaching and research appear to be more or less independent activities’ (Dill and Soo, 2005, p. 507). And data on student selectivity simply provide measures of reputation.

Fourth, reputational surveys generate numerous lacunae and perverse effects. When holistic rankings of institutions become centred on measuring and/or forming reputation, and the measures derive from selectivity of entry and research status, the terms of inter-institutional competition are being defined by credentialism rather than by the formative outcomes of higher education. The implication is that students’ only concern is the status of their degrees, not what they learn. Reputational surveys favour universities already well known regardless of merit, degenerating into ‘popularity contests’ (Altbach, 2006). And they are open to the charge that they simply recycle and augment existing reputation (Guarino et al., 2005, p. 149), regardless of whether it is grounded in the real work of institutions or not. ‘Raters have been found to be largely unfamiliar with as many as one third of the programs they are asked to rate’ (Brooks, 2005, p. 7). Well known university brands generate ‘halo’ effects. For example, one American survey of students ranked Princeton in the top 10 law schools in the country, but Princeton did not have a law school (Frank and Cook, 1995, p. 149). Any system of holistic global rankings tends to function as a reputation maker that entrenches competition for prestige as a principal aspect of the sector and generates circular reputational effects that tend to reproduce the pre-given hierarchy. Reputational rankings are the worst form of ranking, in that they generate the least public goods, the most public bads, and the most selective distribution of private goods. At the same time they are accessible, appear credible, and are easy to generate.

A better approach to university rankings

Given that global university rankings are a potent device for framing higher education on a global scale, it seems better to enter (rather than abstain from) the debate on this framing (Marginson & van der Wende, forthcoming). And it is better to take stock of rankings on a multilateral basis than solely to respond to them individually. It is important to secure ‘clean’ rankings: transparent, free of self-interest and methodologically coherent. What then might be the strategic escape from the problems and dilemmas that the present rankings systems create – while being mindful of the fact that there will be a continuing demand for data of a comparative kind, and someone is going to satisfy that demand? The strategy might consist of six elements:

• Experts on higher education ought to become vigorous actors in the debates about particular rankings, their purposes, interpretations and flaws. The objective should be to achieve scholarly control over judgments and calculations of comparative quality and performance.

• Reputational rankings should be completely rejected.

• Composite approaches across heterogeneous purposes should be rejected.

• Whole of institution approaches should be rejected.

• We can note here that the Jiao Tong rankings, which are amenable to social scientific scrutiny, are qualitatively superior to the Times Higher, though they would be better without the Nobels and Fields Medals, and better steered into discipline-based rankings and away from whole of institution approaches.
• We should foster rankings that are grounded in the purpose-driven aspect of judgements about quality and performance. Rankings tailored to specific and
transparent purposes, and interpreted only in the light of those purposes, can provide useful data for the purposes of students, university self-reflection and public accountability. Policy-related research should facilitate a broad range of comparative measures, corresponding to the different purposes, enabling a horizontal approach to diversity and choice. This suggests that institutions should not be ranked as a whole but on their various functions taken separately, including the different aspects of research and teaching, and the different disciplines, locations and discrete service functions. The system of rankings should be based on a transparent balance of facts about performance, and perceptions of performance based on peer review. Ranking methods should generate information relevant for different stakeholders and provide data and information that are internationally accessible and comparative. Because ‘quality is in the eye of the beholder’, ranking should be interactive for users, particularly students. Users should be able to interrogate the data on institutional performance using their own chosen criteria (a minimal sketch of this design principle follows the numbered list below). In terms of ownership, it is important that institutions are involved and committed to maximum openness. Institutions operating on a broad basis (preferably not just national but regional) should establish an independent agent to collect, process and analyse data, and undertake publication with a designated media partner that operates as the agent of communication rather than the arbiter of values and methodologies.

The system of rankings which most nearly meets these requirements is that developed by the Centre for Higher Education Development (CHE) in Germany (www.che.de) and issued in conjunction with the publisher Die Zeit (Ischinger, 2006). This system includes data on all higher education institutions in Germany; I understand that the Netherlands and Belgium (Flanders) are preparing to join, and some Nordic institutions are also showing interest. The CHE ranking system is thus well positioned to develop into a Europe-wide system. It has also received positive responses from parts of the English-speaking world (Usher and Savino, 2006; Van Dyke, 2005). The chief virtue of the CHE rankings, which has far-reaching implications for the form of competition in higher education, is that it dispenses with a spurious holistic rank ordering of institutions and instead provides a range of data in specific areas including single disciplines. CHE notes there is no ‘one best university’ across all areas, and that in holistic rankings systems ‘minimal differences produced by random fluctuations may be misinterpreted as real differences’. Further, the CHE data are provided via an interactive web-enabled database permitting each student to examine and rank identified programs and/or institutional services based on their chosen criteria (CHE, 2006) and to decide how the different objectives are weighted.

Compared to the Jiao Tong and Times approaches, the CHE rankings create a superior package of private and public goods:

1. The CHE rankings provide more extensive data on quality and performance in higher education by covering more, and more detailed, indicators, and enabling these to be varied by purpose. This both augments the public goods produced, and facilitates private goods by better informing decisions concerning individual investments in higher education.

2. In particular, the CHE rankings foreground teaching and services, broadening the range of public and private goods in a vital area.
3. The CHE ranking system is flexible as to cultural issues, while also being capable of extension to a broad range of countries, so widening the zone of common decision-making. In doing so it augments the potential for global public goods and the potential for variety in them.

4. The CHE rankings generate a dynamic of continuous improvement on a win-win basis, rather than feeding into a zero-sum game in which only some will be motivated to pursue continuous improvement.

5. The CHE rankings provide fewer private status goods to successful institutions, while also imposing fewer private bads on the unsuccessful.

6. The CHE rankings avoid the downsides flowing from the whole of institution approaches of the Jiao Tong and the Times, such as discouragement effects and the tendency to reproduce pre-given hierarchies.
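The design principle at work here can be sketched in a few lines of Python. This is a toy illustration, not the CHE implementation: the discipline, criteria, program names and scores are all invented, and the point is only that the ‘ranking’ is computed afresh from each user’s own criteria and weights rather than fixed by the publisher.

    # Minimal sketch of an interactive, user-weighted comparison in the
    # spirit of the CHE approach. All names and scores are invented;
    # this is not the CHE implementation.

    PHYSICS_PROGRAMS = {
        "University A": {"teaching_eval": 82, "library": 74,
                         "research_rep": 91, "time_to_degree": 68},
        "University B": {"teaching_eval": 90, "library": 85,
                         "research_rep": 60, "time_to_degree": 88},
        "University C": {"teaching_eval": 71, "library": 90,
                         "research_rep": 78, "time_to_degree": 75},
    }

    def user_ranking(programs, user_weights):
        """Order programs by criteria and weights the user chooses.

        Criteria the user omits play no part in the comparison, so there
        is no single 'best' institution, only a best fit per user.
        """
        def score(attrs):
            return sum(w * attrs[c] for c, w in user_weights.items())
        return sorted(programs, key=lambda p: score(programs[p]), reverse=True)

    # A research-oriented student weights reputation heavily:
    print(user_ranking(PHYSICS_PROGRAMS, {"research_rep": 0.7, "library": 0.3}))
    # -> ['University A', 'University C', 'University B']

    # A teaching-oriented student gets a different, equally valid order:
    print(user_ranking(PHYSICS_PROGRAMS,
                       {"teaching_eval": 0.6, "time_to_degree": 0.4}))
    # -> ['University B', 'University A', 'University C']

Two students with different priorities obtain different, equally legitimate orderings, which is exactly the sense in which there is no ‘one best university’.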
Notes

i. The SJTUIHE group argues that the only data sufficiently reliable for the purpose of ranking are broadly available and internationally comparable data of measurable research performance (Liu & Cheng, 2005, p. 133). It is considered impossible to compare teaching and learning worldwide ‘owing to the huge differences between universities and the large variety of countries, and because of the technical difficulties inherent in obtaining internationally comparable data’. Further, the SJTUIHE group did not want to employ subjective measures of opinion or data sourced from universities themselves, as used in some rankings systems. An additional rationale for using research performance data is that arguably research is the most important single determinant of university reputation, and it is widely accepted as merit-based. The SJTUIHE has consulted widely throughout the higher education world on the calculation of the index and compilation of the data. The successive measures have proven to be increasingly robust; this is the rankings system that looks most likely to survive.

ii. Altbach states that ‘there are, in fact, no widely accepted methods for measuring teaching quality, and assessing the impact of education on students is so far an unexplored area as well’ (Altbach, 2006; see also Guarino et al., 2005, p. 149).
References

Altbach, P. (2006) The dilemmas of ranking, International Higher Education, 42.

Bloom, D. (2005) Raising the pressure: globalization and the need for higher education reform, in G. Jones, P. McCarney and M. Skolnik (eds.) Creating Knowledge, Strengthening Nations: The changing role of higher education. University of Toronto Press, Toronto, pp. 21-41.

Bourdieu, P. (1984) Distinction: A social critique of the judgment of taste, transl. R. Nice. Routledge & Kegan Paul, London.

Brooks, R. (2005) Measuring university quality, The Review of Higher Education, 29 (1), pp. 1-21.

Center for Higher Education Development, CHE (2006) Study and Research in Germany: University rankings, published in association with Die Zeit. Accessed 16 March 2006 at: http://www.daad.de/deutschland/studium/hochschulranking/04708.en.html

Dill, D. and Soo, M. (2005) Academic quality, league tables, and public policy: a cross-national analysis of university rankings, Higher Education, 49, pp. 495-533.

Dyke, N. van (2005) Twenty years of university report cards, Higher Education in Europe, 30 (2), pp. 103-125.

Frank, R. and Cook, P. (1995) The Winner-Take-All Society. The Free Press, New York.

Guarino, C., Ridgeway, G., Chun, M. and Buddin, R. (2005) Latent variable analysis: a new approach to university ranking, Higher Education in Europe, 30 (2), pp. 147-165.

Institute for Scientific Information, Thomson-ISI (2006) Data on highly cited researchers, ISIHighlyCited.com. Accessed 15 August 2006 at: http://isihighlycited.com/

Ischinger, B. (2006) Higher education for a changing world, OECD Observer, June.

Kaul, I., Conceicao, P., le Goulven, K. and Mendoza, R. (eds.) (2003) Providing Global Public Goods: Managing globalization. Oxford University Press, New York.

Kaul, I., Grunberg, I. and Stern, M. (eds.) (1999) Global Public Goods: International cooperation in the 21st century. Oxford University Press, New York.

Liu, N. and Cheng, Y. (2005) The academic ranking of world universities, Higher Education in Europe, 30 (2), pp. 127-136.

Liu, N. and Liu, L. (2005) University rankings in China, Higher Education in Europe, 30 (2), pp. 217-227.

Marginson, S. (in press) The public/private division in higher education: a global revision, Higher Education.

Marginson, S. and van der Wende, M. (forthcoming) Globalisation and higher education, paper prepared for the OECD Higher Education Futures project.

Rocki, M. (2005) Statistical and mathematical aspects of ranking: lessons from Poland, Higher Education in Europe, 30 (2), pp. 173-181.

Shanghai Jiao Tong University Institute of Higher Education, SJTUIHE (2006) Academic ranking of world universities. Accessed 1 September 2006 at: http://ed.sjtu.edu.cn/ranking.htm

The Times Higher (2006) World university rankings, The Times Higher Education Supplement, originally published 28 October 2005. Accessed 10 April 2006 at: www.thes.co.uk

Usher, A. and Savino, M. (2006) A World of Difference: A global survey of university league tables. Accessed 2 April 2006 at: http://www.educationalpolicy.org