University ranking systems, prestige and power

This article was downloaded by: [Fran Collyer] On: 24 April 2013, At: 01:39 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Critical Studies in Education Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/rcse20

The production of scholarly knowledge in the global market arena: University ranking systems, prestige and power Fran Collyer

Department of Sociology and Social Policy, University of Sydney, Sydney, Australia Version of record first published: 24 Apr 2013.

To cite this article: Fran Collyer (2013): The production of scholarly knowledge in the global market arena: University ranking systems, prestige and power, Critical Studies in Education, DOI:10.1080/17508487.2013.788049 To link to this article: http://dx.doi.org/10.1080/17508487.2013.788049


Critical Studies in Education, 2013 http://dx.doi.org/10.1080/17508487.2013.788049

The production of scholarly knowledge in the global market arena: University ranking systems, prestige and power Fran Collyer* Department of Sociology and Social Policy, University of Sydney, Sydney, Australia


(Received 22 January 2013; final version received 16 March 2013)

The relationship between disciplines and the institutions within which they are situated is a fertile area for researching the shaping of sociological knowledge. Applying theoretical insights from the sociology of knowledge, this article draws on an empirical study of research publications in the sociology of health and medicine to show which institutions in the Australian context are most likely to use sociological theory. When the institutions are positioned within the global university ranking system, an inverse association between sociological theory and the relative wealth and prestige of the originating institution becomes evident. Some of the implications of this finding are discussed with reference to the on-going viability of disciplines.

Keywords: academic capitalism; higher education; knowledge production; sociology; transnationalism; university rankings

Introduction

Universities and their institutional operations have never been entirely independent of the demands of philanthropic benefactors, aristocratic or entrepreneurial elites, or even nation states. Nevertheless, these institutions have, since the 1960s, steadily lost their traditional right to self-governance, and from the 1990s, increasingly engaged with corporate, government and community sectors to supplement the steady decline in financial support from the public purse (Kayrooz, Kinnear, & Preston, 2001; Williams, 2010). Such changes have stimulated debate over the implications for university governance, for the work practices of intellectual workers, and for disciplines and scholarship, but also over whether this may be the harbinger of a new system of production and exchange in expert knowledge.

Several scholars have addressed this question. Marginson (2009, p. 10), for example, sees the emergence of an additional layer of activities on top of the old industrial capitalist economy. This new knowledge economy does not displace the existing system, but 'seems to share the world comfortably with its predecessor'. Nevertheless, it is associated with profound disruption as knowledge-intensive production, and flows of information, multiply at unprecedented rates. For Marginson, this brings the potential for a re-organisation of the old 'Ivy League status order', removing the privileges of the English-language institutions of America and Europe and offering the chance for a more diverse research and educational system (Marginson, 2010, pp. 6962, 6978).

*Email: [email protected] © 2013 Taylor & Francis



Not all share such optimism. Drahos and Braithwaite (2002), for instance, see the market system itself as having been re-configured, for in this new historical period, knowledge is not merely a means to power and market advantage but the very 'source of profits in modern global markets' (Drahos & Braithwaite, 2002, pp. 39, 52). Drahos and Braithwaite analyse this new development in terms of a 'knowledge game', within which nation states structure the market not only by enacting national legislation but also by entering into international trade agreements concerning intellectual property rights and copyright access. And in regulating the system and protecting knowledge as 'private property', nation states ensure that the larger share of the benefits ends up not with the inventors of knowledge, but among corporate players with the capacity to erect barriers around these knowledge products (e.g., through licensing arrangements) and defend them in both the legal and political arenas.

According to the 'rules' of the 'knowledge game', the academic market operates somewhat differently to other commodity markets such as software or pharmaceuticals. Unlike these, scholarly knowledge is rarely 'owned' outright but endlessly recycled, with each access incurring a cost. As a consequence, even though much of the knowledge has been produced within public institutions, the uploading of knowledge by corporations into privately owned journals and databases means that universities and other institutions must pay for policy-makers, managers, academics and students to access the products (Drahos & Braithwaite, 2002, pp. 4, 15). Hence, the resultant 'game' is one in which the countries behind the development of the intellectual property and copyright systems are the major beneficiaries, with developing countries being net importers of knowledge.
Australia, for instance, despite its developed country status and significant capacity for knowledge production, has nevertheless paid out more in licensing and patent fees than it has received (Drahos & Braithwaite, 2002, p. 11).

A somewhat different perspective on the university sector is taken by Slaughter and Rhoades (2004). Labelling the developments as 'academic capitalism', they too focus on the growing market for knowledge products and the escalation of market-like processes and activities. As such, academic capitalism describes one of the key sectors in the development of a contemporary, global, knowledge-based form of capitalism (Robinson, 2004; Sklair, 2002), where education policy is increasingly produced in a transnational space, controlled jointly by the market and various state and international bureaucratic mechanisms that set the objectives and measure performance (Moutsios, 2010, pp. 122, 125). Although questions remain about precisely how universities contribute to this historical form of capitalism, and the extent to which they have become integrated into the logic of the market (Kauppinen, 2013, pp. 3–4), there is some suggestion of an 'unevenness' in the set of transformations now known as academic capitalism. While changes have penetrated to the level of university departments and the 'heartland' of educational activity:

    the transition to an entrepreneurial culture is very much incomplete, uneven, and even contested . . . perhaps most contested in regard to an on-going commitment to traditional conceptions of academe's role in conducting fundamental research. (Slaughter & Rhoades, 2004, p. 203)

This specific arena of contestation cries out for greater examination. Though the changing conditions of academic work have come under some scrutiny (e.g., Connell, Wood, & Crawford, 2005; Kayrooz et al., 2001; Lafferty & Fleming, 2000), little research has focused on the kind of knowledge produced within the university sector and whether it has been altered along with the institutional and market environment. Has there been a symbiosis between more open forms of scholarly communication and industrial and financial capitalism, as Marginson (2009, p. 14) proposes? Or do some remnants of the 'ivory tower' remain, where traditional scholarly values and practices continue?

The issue is addressed in this article by paying attention to one small arena of academic scholarship, the sociology of health and medicine. Previous studies have suggested a diversity of experiences across the disciplines, for these differentially 'translate' the pressures for the instrumentalisation of knowledge coming from the political and economic fields (Albert, 2003, p. 149). Some, such as the applied branches of the natural sciences and engineering, are said to respond more readily to the processes of academic capitalism than the humanities and social sciences (Rhoades & Slaughter, 1998; Slaughter & Leslie, 1997). Leaving aside the question of how and why this might be the case, the study before us seeks to locate possible differences in orientation toward knowledge production between the discipline area and the university sector as a whole.

Two sets of data are required for such a comparison. The first comes from an analysis of the research products of sociologists of health and medicine, showing their orientation towards knowledge production and what it is that is valued within the discipline. The second is comprised of publicly accessible data about the world ranking of universities. The latter results from a new 'industry' developed to evaluate both disciplines and universities, which purports to offer 'objective' indicators of the value of individual institutions in the world knowledge market.
A comparison between these forms of data should indicate the extent to which there might be a symbiosis – or, alternatively, a conflict – between traditional notions and practices of scholarship and the demands of the new market regime, and whether such differences are shaped by the wealth and prestige of individual institutions. The section directly below examines Australian universities and their sociological output according to the standard ranking systems. We then move to an outline of the empirical study at the heart of this analysis: a context-content analysis of journal manuscripts from the sociology of health and medicine. A final section offers a short discussion of the production of sociological knowledge under academic capitalism. It ends with a reflection on some of the implications flowing from this analysis.

The ranking of Australian universities for the world market

Australian universities, like their counterparts in many other countries, have become increasingly subject to external political and bureaucratic control as expert knowledge has been re-fashioned as a marketable commodity. The most significant attempt at state intervention occurred with the 'Dawkins Reform' of the late 1980s, which forced the various institutes of technology and colleges of advanced education1 into mergers with existing universities or into new, and much larger, institutional conglomerates. This ended the binary divide between universities and 'others' but inaugurated a period of intense competition between institutions (Lafferty & Fleming, 2000, p. 259). It also coincided with the birth of a new public policy discourse about universities, for since that period they have been re-branded as comprising the 'higher education sector': the new name reflecting a focus on tertiary education as a marketable commodity and a decline in the public resourcing of research.
Australia currently has over 40 universities, the majority of which are public institutions and, by definition, legally required to offer undergraduate degrees and undertake research. These institutions have varying levels of prestige and wealth. Some are almost wholly reliant on state funding, while others have considerable independence as a result of significant endowments, bequests and property assets.

The universities have been subjected to a variety of global ranking schemas – purporting to measure quality and/or performance – since the introduction of an annual set of league tables by the Shanghai Jiao Tong Institute in 2003 (Marginson, 2009, p. 22; Wildavsky, 2010, p. 112). The Shanghai Jiao Tong University Academic Ranking of World Universities scheme currently places 17 of our Australian universities in the top 500.2 An alternative is the Times Higher Education World University Rankings. This scheme is more favourable towards Australian institutions, putting seven Australian universities in the top 200.3 The legitimacy of both systems was accepted, according to Marginson and van der Wende (2007, p. 309), because they intuitively confirmed the existing status order, with the American and British universities – including Harvard, Stanford, Cambridge and Oxford – at their apex. The ranking of Australian universities according to the Times Higher Education World University Ranking System is shown in Table 1.

Table 1. World university rankings.

               University                              National rank   World rank
Highest rank   University of Melbourne                 1               37
               Australian National University          2               38
               University of Sydney                    3               58
               University of Queensland                4               74
               Monash University                       5               117
               University of New South Wales           6               173
               University of Western Australia         7               189
Middle rank    University of Adelaide                  8               201–225
               Macquarie University                    9               226–250
               University of Wollongong                10              251–275
               University of Newcastle                 11              276–300
               Queensland University of Technology     12              276–300
Lowest rank    Charles Darwin University               13              301–350
               University of Tasmania                  14              301–350
               Curtin University                       15              351–400
               Deakin University                       16              351–400
               Flinders University                     17              351–400
               Griffith University                     18              351–400
               La Trobe University                     19              351–400
               University of South Australia           20              351–400
               Swinburne University of Technology      21              351–400
               (all other Australian universities)     –               –

Source: Adapted from the Times Higher Education World University Rankings 2011–12. Universities with the same world rank range are listed alphabetically.

Note that the ranking of Australian universities generally coincides with their date of establishment: a linking of prestige, market power, wealth and age common to several countries (Marginson, 1999; Usher, 2009, p. 89). In the Australian case, the apex is occupied by several of the 'Sandstones': universities established prior to World War I and now the wealthiest in terms of assets and research income (Marginson, 1999, pp. 18–20). These are joined in the highest rank by the three 'Redbricks', all established by later acts of parliament (Australian National in 1946; New South Wales in 1949, as the New South Wales University of Technology; and Monash in 1958). The association between date of establishment and world ranking of the Australian universities tends to lessen within the middle and lower ranks, though there continues to be a strong relationship between the wealth of the institutions and their placement in the system.

Table 2. World/national sociology rankings.

Australian universities
in the top 50              51–100        101–150      151–200
13. Australian National    La Trobe      Deakin       Flinders
20. Melbourne              Adelaide      Macquarie    Griffith
25. Sydney                 Queensland    Newcastle    Tasmania
29. Monash
34. New South Wales

Source: QS World University Rankings by Subject 2011.

Universities have also been ranked according to their 'performance' in each discipline by the Quacquarelli Symonds system, which shares a very similar methodology to that of the Times.4 The rankings for Australian universities according to their sociological output are shown in Table 2. When Tables 1 and 2 are compared, a general match is apparent between the world ranking of universities (according to The Times system) and the ranking of their sociological output (by the QS scheme), for many of the more prestigious and wealthy universities have a high sociology ranking as well. Where there is variation between the two tables (notably the exclusion of the University of Western Australia), it can largely be explained by the timing of the institutionalisation of sociology in Australia, whereby (apart from the University of Tasmania) sociology departments were initially established not in the early 'Sandstones' but in the newer, more progressive universities. This historical trend is similar to that of the UK and the USA, where the discipline was not favoured in the elite universities of Cambridge or Oxford, nor in the Ivy League universities of the USA (see Bulmer, 1985 for an account of Cambridge's reluctance to accept sociology in the UK). In Australia, a department of sociology (combined with social work) was formed at a very late stage at the University of Sydney in 1991, but even today, independent, named departments remain missing from the Universities of Western Australia, Adelaide and Melbourne. The 'Sandstones' have recently begun to invest in sociology and the social sciences, but the interim period allowed some of the other universities, such as La Trobe, to develop a strength in the discipline.

The study of sociological knowledge: method and approach

The second empirical focus for this study is the publications of Australian sociologists of health and medicine.
Evidence comes from a quantitative analysis of refereed articles, published since 1960, in the journals most likely to contain and represent the intellectual work of the sub-discipline. Table 3 shows the proportion of papers from each source and indicates an overall study population of 670 papers.

Table 3. The study population – journals 1960–2011.

                                             Number of papers
The Journals                             1960–1989      1990–2011      Total
Health Sociology Review                    –     –      207    57%     207    31%
Sociology of Health and Illness            5    2%       30     8%      35     5%
Social Science and Medicine               72   24%       16     4%      88    13%
Journal of Sociology                      70   23%       57    16%     127    19%
Australian and NZ Journal of
  Public Health                           19    6%        8     2%      27     4%
TASA Conference Proceedings                –     –       48    13%      48     7%
Australian Journal of Social Issues      127   42%        –     –      127    19%
ANZSERCH Conference Proceedings           11    4%        –     –       11     2%
Total (n = 670)                          304  100%      366   100%     670   100%

Note: Totals may not add to 100% due to rounding. Nationality is based on the country affiliation of the first author (as stated on the manuscript), and only the current names of the journals are provided in this table.

Papers from these journals were selected where they could be regarded as refereed research articles. Book reviews and editorials were excluded. Codes were developed to capture each article as a 'case', with demographic details taken from the manuscript (e.g., author name, country and university affiliation) as well as manuscript content (e.g., method, use of sociological theory). These two sets of variables – demographic context and manuscript content – enable cross-tabulation between the independent (e.g., institutional location) and dependent variables (e.g., use of theory). This method of context-content analysis has been employed effectively elsewhere to map the effect of institutional context on knowledge production (Collyer, 2009a, 2009b).

The country designation of each paper is taken from the affiliation of the first author as provided on the manuscript. This offers an indicator of 'professional citizenship' rather
than personal nationality at the time of publication. A few of the more prolific authors appear on the database on more than one occasion but less than 5% were found to have shifted (either temporarily or permanently) to new institutions in new countries, indicating the general reliability of this variable as a country indicator. The relative strengths of the study’s method include its reliance on journal articles rather than books, as the former best reflect the majority of health sociology research (Willis, 1991, p. 49). While some have argued for the inclusion of books in any scoping study of sociology (e.g., Halpern & Anspach, 1993, p. 288), it should be pointed out that books and journals are written for different audiences and purposes and thus not directly comparable. A second strength is found in the manual, rather than computer-generated data of the study. Studies relying on computer-generated keyword analyses (e.g., Seale, 2008), or bibliometric databases (e.g., Arvanitis, Waast, & Gaillard, 2000), are less able to interrogate the contents of papers or evaluate scholarly contributions to the field. The method in this study, in contrast, relies on the careful reading and systematic coding of each article by a researcher with an appropriate familiarity with the field. Although this method introduces a subjective element of evaluation into the process, it is nevertheless more rigorous than review-based analyses containing personal selections of well-known texts (e.g., Willis, 1991). Finally, this study, with its alternative method of analysing evidence from the written manuscript, overcomes problems associated with questionnaires and the self-reporting of a participant’s academic practices (e.g., Connell et al., 2005). 
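The test–retest reliability check used in coding studies of this kind reduces to a percent-agreement calculation between two coding passes over the same articles. A minimal sketch in Python; the codes below are invented for illustration, not the study's data:

```python
# Percent agreement between an original coding pass and a blind re-coding
# of the same randomly selected articles (hypothetical data).

def percent_agreement(first_pass, second_pass):
    """Share of articles coded identically in both passes."""
    assert len(first_pass) == len(second_pass)
    matches = sum(a == b for a, b in zip(first_pass, second_pass))
    return matches / len(first_pass)

# Invented codes for the variable 'uses sociological theory' (1 = yes, 0 = no)
original = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
recoded  = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

rate = percent_agreement(original, recoded)
print(f"agreement: {rate:.0%}")  # 8 of 10 codes match: prints "agreement: 80%"
```

Where agreement falls below the conventional threshold, the codes are rebuilt and the articles re-coded, as described below.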
Coding was performed by one individual, and reliability (i.e., consistency of coding outcomes across the study population) was tested through the blind re-coding of a random selection of articles and, where necessary, the re-building of codes and re-coding until conventional standards of reliability were achieved. Using test and re-test procedures, a comparison of coding outcomes for all variables discussed in this paper showed at least 80% reliability. The statistical program SPSS was used to record and analyse the data. Ethical clearances were unnecessary, but the financial support of the School of Social and Political Sciences at the University of Sydney is acknowledged and appreciated.

The use of sociological theory

Sociological theory is used for various purposes within sociological writings: as a tool to understand and explain social reality (Giddens, 1989, p. 17), a means to 'import a
moral and political framework' (Harley, 2008, p. 299), and a strategy to gain market appeal and status by establishing an author's allegiance or opposition relative to other traditions or perspectives (Harley, 2008, pp. 302–303). Likewise, the corpus of sociological theory has been acknowledged as a means to assist and integrate the discipline (Alexander, 1987) as well as provide 'a symbolic focus, a shared language, and some kind of identity, for academics and students in sociology' (Connell, 1997, p. 1544). Rather than investigate the role of theory in processes of disciplinary formation and professionalisation, a task undertaken elsewhere (e.g., Collyer, 2010; Connell, 1997), or even explore the individual theories constructed or used by sociologists of health, a matter also examined elsewhere (Collyer, 2009a; Lupton, 1993; Willis, 1991; Willis & Broom, 2004), this article investigates the potential for theory utilisation to act as a sensitive indicator of the influence of the academic market on the production of sociological knowledge. The extent to which the use of theory might be influenced by the relative prestige of the author's university has not previously been investigated.

In order for this relationship to be tested, papers in the study population were coded as 'containing' or 'not containing' sociological theory. These categories were not imposed through the researcher's subjective assessment of the theoretical content of a given paper but determined by the presence or absence of the author's explicit statements in each manuscript about their theoretical or conceptual framework. An example of a paper coded for the utilisation of sociological theory is Gillian Hatt's (1996) study of low blood pressure. The author tells readers that it employs the theoretical framework of embodiment to reveal changes in the bodily representation of patients as they are diagnosed with this condition. Equally, Ann-Claire Larsen's (1996, p. 116) paper is coded positively for theory, for it states that Foucault's genealogical tools are applied to examine the Child Health Service as a 'technology of governance and power relations that render populations visible'. A contrary example can be seen in Collyer (2007), where theorists are named in the article, and theory informs the qualitative analysis of a chronic shortage of expert medical staff, but there are no statements to show how theory has been used as a framing device for the study.

As previously noted, theory is used for various purposes, and in everyday sociological practice there is little consensus over the use of such terms as theory, perspective, paradigm or concept; conflation between these terms is common. Given that codes are built from statements within the papers themselves, papers said to contain sociological theory will necessarily include 'grand' theories attached to individuals (e.g., Michel Foucault), 'middle range' theories clustered around a topic or concept (e.g., embodiment, consumerism, education or the professions), and theories reflecting specific sociological traditions or epistemologies (e.g., functionalism, interpretivism). Many papers contain no explicit statements about sociological theory (265/670, or 45%).

As can be seen in Table 4, a significant statistical relationship exists between papers employing sociological theory and the national/world ranking of their originating institutions. Contrary to expectations, the higher users of sociological theory are authors located in the less prestigious universities. Sociologists in the more prestigious institutions use proportionally less sociological theory. The relationship can be further explored using the rankings of Australian sociology departments. Table 5 indicates that where authors are employed within institutions where sociology has been ranked highly, less usage of sociological theory occurs. In other words, within the lower ranked institutions, 73% of the authors use sociological theory compared to only 45% of the authors in the more highly rated institutions.


Table 4. The use of theory by overall institutional rank.

                Papers employing           Papers not employing
                sociological theory        sociological theory        Totals
Highest rank    164    54%                 137    46%                 301    100%
Middle rank     153    73%                  58    28%                 211    100%
Lowest rank      62    78%                  18    23%                  80    100%
(n = 592)

Notes: Totals may not add to 100% due to rounding. Not all manuscripts provide details of institution, and not all authors are employed within universities; hence n = 592. Data about university ranking are derived from The Times Higher Education World University Rankings. The association is statistically significant using Pearson's chi-square: χ²(2) = 24.797, p < 0.001.


Table 5. Use of theory by ranking of sociological performance.

             Uses sociological theory     No use of theory     Total
High rank    205    51%                   198    49%           403    100%
Low rank     122    65%                    67    35%           189    100%
(n = 592)

Notes: Data about the university ranking of sociological performance in each institution are derived from The Times Higher Education World University Rankings in association with Quacquarelli Symonds. The association is statistically significant using Pearson's chi-square: χ²(1) = 9.740, p < 0.002.
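The chi-square statistics reported for Tables 4 and 5 can be re-derived from the cell counts alone. A minimal sketch in pure Python (scipy.stats.chi2_contingency with correction=False returns the same values):

```python
# Re-computing the Pearson chi-square statistics reported for Tables 4 and 5
# from their cell counts.

def chi_square(table):
    """Pearson chi-square statistic for a contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Table 4: rows = highest/middle/lowest rank; columns = theory / no theory
table4 = [[164, 137], [153, 58], [62, 18]]
# Table 5: rows = high/low sociology rank; columns = theory / no theory
table5 = [[205, 198], [122, 67]]

print(f"{chi_square(table4):.3f}")  # 24.797 (df = 2), as reported
print(f"{chi_square(table5):.3f}")  # 9.740 (df = 1), as reported
```

That both reported statistics reproduce exactly from the published cell counts confirms the internal consistency of the two tables.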

The inverse association between institutional ranking and the use of sociological theory is a recent one. When the timing of publications is taken into account, we find the association becoming evident (and statistically significant) only from the mid-1980s. Prior to this period, the ranking of the university bore no relationship to whether its sociologists explicitly utilised sociological theory.

The ranking and evaluation of universities and sociological output

These unexpected findings deserve further analysis. We might begin with an examination of the university ranking systems, for while these have their proponents (e.g., Sheil, 2010), they also have critics (Saisana & D'Hombres, 2008; Usher, 2009). Ranking systems are claimed to provide measures of quality and performance, and though their methodologies have been adjusted over time, they consistently place the wealthier, long-established universities ahead of the less well-resourced and newer universities. This means all ranking systems tend to place the same national institutions at the apex but fail to agree on the middle and lower ranking universities. Alex Usher suggests that this is because they are not measuring what they are thought to be measuring, assessing only 'inputs' (such as institutional age, financial clout and size) and failing to pick up on what is 'value-added' (Usher, 2009, p. 89).

The ranking of universities according to reputation and status enables the hierarchical order to be readily manipulated. For example, an important difference between The Times Higher Education and the Shanghai Jiao Tong University Institute rankings is that the former has elevated the status of the British and Australian universities relative to those of America, removing 21 American universities from the top 100 (Marginson, 2009,


p. 26) and placing Australia third in the world for its universities, after the USA and the UK (Marginson, 2007, p. 135). The major criticism of The Times system is its emphasis on reputational data gleaned through opinion surveys of academics and employers (a total of 50% of the weightings). Though The Times argues these surveys to be its strength – in comparison to other systems, which rely more heavily on publications-based data – the data are nevertheless problematic, for they are collected inconsistently across the sector (Usher, 2009, p. 89) and the surveys are not made public: thus it is unknown what questions are asked or who the respondents are (Marginson, 2007, p. 134).

The ranking systems are not only biased towards measuring reputation and status rather than performance, but they also value some forms of performance more highly than others. Both the Shanghai Jiao Tong and The Times rankings rely on publication and citation data. This is more so in the case of the Shanghai Jiao Tong index, which bases its methodology on a weighting of 20% from citations in leading journals, 20% from publications in Science and Nature, 20% from a list of highly cited authors in Thomson ISI, 30% from Nobel prize winners in the fields of science and economics (not literature or peace), and 10% from a staffing ratio. Thus the Shanghai Jiao Tong system privileges universities sizeable enough to have a strong performance in a large number of fields; those with proportionally high numbers of research-active rather than teaching-burdened staff; universities strong in the natural sciences (and also medicine, where there is a high number of publications per academic); universities where academics tend to cite academics only from their own university or country (as they do in America but not Australia, see Collyer, 2012, p. 233); and those from other English-language universities (Amsler & Bolsmann, 2012, p. 287; Marginson, 2007, p. 133; Williams, 2008, pp. 52–53).
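The Shanghai Jiao Tong weighting scheme described above (20/20/20/30/10) is, mechanically, a plain weighted sum of component indicators. A minimal sketch with invented component scores for a hypothetical institution; the indicator names and values are illustrative, not ARWU's published data:

```python
# ARWU-style composite score: a weighted sum of normalised component
# indicators. Weights follow the 20/20/20/30/10 scheme described in the
# text; the component scores themselves are invented.

WEIGHTS = {
    "citations_leading_journals": 0.20,
    "science_nature_publications": 0.20,
    "highly_cited_authors": 0.20,
    "nobel_prizes": 0.30,          # science and economics only
    "per_staff_performance": 0.10,
}

def composite(scores):
    """Weighted sum of component scores (each on a 0-100 scale)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-6  # weights must total 100%
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Invented scores for a hypothetical university
scores = {
    "citations_leading_journals": 60.0,
    "science_nature_publications": 40.0,
    "highly_cited_authors": 50.0,
    "nobel_prizes": 10.0,
    "per_staff_performance": 70.0,
}
print(round(composite(scores), 2))  # 0.2*60 + 0.2*40 + 0.2*50 + 0.3*10 + 0.1*70 = 40.0
```

The 30% weight on Nobel prizes illustrates the text's point: a single indicator dominated by a handful of large, science-strong institutions carries nearly a third of the composite.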
After considerable criticism, the Shanghai Jiao Tong system was modified to reduce its bias towards the natural sciences, doubling the weighting for the social sciences to counteract their lower publication rates and reducing from 20 to 10% the weighting for individual performance. Nevertheless, the Shanghai Jiao Tong system continues to define performance in terms of the natural sciences, favours the American universities (Wildavsky, 2010, p. 114), and largely ignores Australian research output from the humanities and law (Williams, 2008, pp. 55–56).

The Times system also relies on citation data for 20% of the institutional score, but claims to counteract the research-performance bias of the Shanghai Jiao Tong system with its qualitative data from academic and employer surveys (a combined total of 50%), plus an indicator representing the proportion of foreign students and staff (5% each), and staff–student ratios as a proxy for teaching quality (20%). The Times system clearly incorporates a broader range of data sources, particularly peer review, in an effort to avoid penalising universities without a significant strength in the natural sciences (Wildavsky, 2010, p. 115). However, its surveys have been criticised for being unrepresentative, having low response rates, and including data about staff and student numbers without quality control, thus allowing for institutional manipulation (Williams, 2008, p. 56). Moreover, although data are now drawn from Elsevier's Scopus database in preference to Thomson ISI, this simply shifts the bias in research performance from American journals to those of Europe. Thus while the Shanghai Jiao Tong system primarily measures research performance, The Times system primarily measures reputation and, as Marginson (2007, pp. 138–139) argues, recognises an institution's contribution to maintaining aristocratic privilege and its capacity to make money from foreign students.
Both adhere to the blueprint of university 'performance' in terms of the Anglo-American tradition, and neither measures the quality of teaching, the institution's achievements in professional or vocational
education, or its contributions towards the community, towards democracy or the solving of social problems (Marginson, 2007, pp. 138–139). Of equal, or perhaps even greater, concern is the complete lack of attention paid to identifying or measuring achievements in scholarship. Such a move would necessitate the collection of data from individual disciplines and attention to the conventions and values of the disciplines themselves. Current systems ignore the diversity of research practices across the sector, combining data into whole-of-university rankings (Van Dyke, 2005, p. 106). These are composite indexes, bringing together data collected for different purposes to ascertain an 'overall' measure of quality and performance. In this sense, the university rankings are ideal types: analytical constructs with no empirical correspondence to existing institutions. As a consequence, a university at the apex of the ranking system is assumed to exhibit all the desirable characteristics of a quality institution, but 'quality' is narrowly defined in the ranking process: only certain aspects of research performance are taken into account, less desirable features are ignored, and its performance in the individual disciplines is not rated even though it might perform very poorly in many of these.

Of course, ranking systems have been developed for individual disciplines, and our data (displayed in Tables 1 and 2) show a similarity between the universities ranked highly and those with highly ranked sociology departments. The question therefore remains: why do highly ranked universities and sociology departments in the Australian context utilise less sociological theory in their research publications? The empirical study of publications in the field of the sociology of health and medicine provides at least one answer to this question.

Method in sociology: qualitative or quantitative?
Method and methodology have long been considered a source of tension and division within the sociological community. Leonard Pearlin (1992), while refraining from suggesting a polarisation of sociology into two distinct groups, nevertheless discusses this matter in terms of two orientations towards sociological work: 'meaning' versus 'structure seeking'. Both quantitative and qualitative forms of sociology are conducted in the Australian context, with a tendency for the latter to be more common. This compares with Britain, where there is extremely little quantitative sociology, and the USA, where the quantitative method is dominant (Collyer, 2012, p. 219).

In this study, papers based on empirical work were coded into two groups: quantitative and mixed method, versus qualitative only. The coding process did not take into account methodological issues, due to the difficulty of constructing an objective measure to distinguish the two groups. Instead, papers were assigned to a first category if they contained tables and statistical calculations, or to a second if they offered only textual analysis. Results indicate a statistically significant association between method and institutional rank, with authors from the more prestigious institutions more likely to engage in quantitative analysis: 59% of papers from the highest-ranked institutions utilise quantitative methods, compared with 34% from the middle-ranking institutions and 29% from the lowest-ranked institutions. This association is shown in Table 6. A similar finding appears if we examine the association between method and the ranking of sociological output. The results of this analysis are shown in Table 7, where 51% of the papers from universities with a highly ranked sociological output contain quantitative methods, compared with only 37% from the institutions where sociology is not ranked highly.
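The chi-square statistics reported in the notes to Tables 6 and 7 can be reproduced from the published cell counts alone. Below is a minimal check in pure Python: the counts come from the tables, while the helper function is ours rather than part of the study's method.

```python
# Pearson's chi-square statistic for a contingency table, computed from
# first principles to check the values reported for Tables 6 and 7.

def chi_square(observed):
    """Return (chi2, df) for a contingency table given as a list of rows."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    n = sum(row_totals)
    chi2 = sum(
        (obs - row_totals[i] * col_totals[j] / n) ** 2
        / (row_totals[i] * col_totals[j] / n)
        for i, row in enumerate(observed)
        for j, obs in enumerate(row)
    )
    df = (len(observed) - 1) * (len(col_totals) - 1)
    return chi2, df

# Table 6: method (rows) by overall institutional rank (columns).
table6 = [[105, 40, 14],   # quantitative or mixed method
          [73, 77, 35]]    # qualitative only

# Table 7: method (rows) by ranking of sociological output (columns).
table7 = [[117, 42],
          [112, 73]]

chi2, df = chi_square(table6)
print(round(chi2, 3), df)  # 24.629 2 -- matches the note to Table 6
chi2, df = chi_square(table7)
print(round(chi2, 3), df)  # 6.538 1 -- matches the note to Table 7
```

Both reported statistics reproduce exactly, confirming that the tables' counts and notes are internally consistent (the Table 6 test has 2 degrees of freedom, the Table 7 test has 1).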
Table 6. The use of method and overall institutional rank.

               Quantitative or
               mixed method        Qualitative only       Totals
Highest rank     105    59%           73    41%         178   100%
Middle rank       40    34%           77    66%         117   100%
Lowest rank       14    29%           35    71%          49   100%
(n = 344)

Notes: Studies not based on empirical work are excluded from this analysis. Data about university ranking are derived from The Times Higher Education World University Rankings. The association is statistically significant using Pearson's chi-square: χ² = 24.629, df = 2, p < 0.001.

Table 7. Method by institutional ranking of sociological performance.

               Quantitative or
               mixed method        Qualitative only       Totals
High rank        117    51%          112    49%         229   100%
Low rank          42    37%           73    64%         115   100%
(n = 344)

Notes: Totals may not add to 100% due to rounding. Data about university ranking are derived from The Times Higher Education World University Rankings in association with Quacquarelli Symonds. The association is statistically significant using Pearson's chi-square: χ² = 6.538, df = 1, p = 0.011.

The finding that publications from highly ranked universities, and those from institutions with high scores in sociology, are more likely to contain quantitative rather than qualitative methods helps to explain their place in the ranking system. Given the bias of the system towards the sciences rather than the humanities or social sciences, institutions with groups of sociologists producing quantitative work – which attracts significantly higher citation rates – have an advantage.

In Australia, the major source of quantitative sociology from 1960 until the late 1980s was the Australian National University, with the University of Queensland establishing an interest in this method from the late 1970s and becoming the dominant institution with regard to quantitative sociology from the 1990s. Universities with a somewhat lower level of representation in the field of quantitative methods have been the University of New South Wales (from the late 1970s and in the early 1980s), the University of Sydney (throughout the 1980s and during the 2000s), and La Trobe (mainly in the late 1980s and early 2000s). In general, institutional 'strengths' in quantitative methods, which contribute towards a favourable ranking, appear incompatible with the high utilisation of sociological theory. This relationship has been demonstrated empirically (Collyer, 2012).

Why might studies based on the quantitative, numerical method, and those with less emphasis on theory, be more attractive propositions within the new market-driven system of higher education? The answer lies in the association of quantitative method with the natural sciences and the ties of the latter to research funding policies. These policies and processes are underpinned by an ideology of scientism, whereby primacy and legitimacy are accorded to the knowledge of the natural sciences and its methodologies. Certainly this has not always been the case. Dispute over method reaches at least as far back as the closing years of the nineteenth century and the intense competition between emerging disciplines (including sociology, biology and psychology) in both Europe and America.
Prestigious proponents, such as Max Weber and Wilhelm Dilthey, fought for the independence of the 'moral sciences' and their historical-comparative and interpretive approaches; yet statistical, quantitative methods gained ground as the meaning of science itself was redefined and narrowed to refer to practices associated with the experimental,
laboratory method (Collyer, 2010, p. 96; Ilerbaig, 1999; Oliver, 1983, p. 526). Emphasising the scientific basis of statistical, numerical methods enabled new disciplines to demonstrate their distinctiveness, and individual universities – such as Columbia – were able to gain a competitive advantage (Camic & Xie, 1994, pp. 776–777, 781). Moreover, in a period of uncertainty, war and industrial unrest for the developed nations, those disciplines and universities aligned with the new investigative techniques were able to offer their services to nation states and corporate players to assist in the broader effort to manage, order and organise industrial society (Sturdy & Cooter, 1998, pp. 423, 448). Over the course of the twentieth century, as an 'engineering conception of science' became increasingly dominant (Ross & Porter, 2003, p. 219), quantitative, numerical methods were established as the model for legitimate social research.

In the contemporary context, one of the ways universities can secure market position is through support for the natural sciences and other disciplines with similar research practices (including the use of large research teams to ensure high citation rates per staff member). Such strategies are only successful when combined with ranking systems based on 'market-derived models of organisation' that ignore all 'outcomes' not easily measured or quantified, and where scholarly pursuits not directly generating revenue are eliminated or kept to a minimum (Lafferty & Fleming, 2000, p. 265). In this context, the humanities and critical disciplines, with their often individualised approach to scholarship and 'outputs' that are meaningful in non-market ways (e.g., contributions toward public sociology), are at a distinct disadvantage. After all, it is a system that emphasises quantification, evaluation and standardisation. How might support for the natural science model of research bring financial success to the wealthier institutions?
One reason can be found in their greater capacity to support the resource-intensive work of teams of researchers in funding bids for large-scale, complex – and expensive – research programs. In Australia there are few funding agencies able to provide sizeable grants, and each year these are largely awarded to researchers located within the wealthier institutions. In 2012, for example, of the 732 Australian Research Council Discovery Grants awarded for the years 2013 to 2015, 68% went to researchers located at the eight wealthiest universities (i.e., universities in the highest rank), and a further 19% to those in second-tier universities (universities in the middle rank), leaving only 13% to be spread across the remaining half of the sector. Likewise, of the 13 National Health and Medical Research Council Program grants announced in December 2012 (the smallest of which was $AUD1,085,571), none were awarded to researchers at middle or lower ranked universities.

As we have seen from this small study, universities across the sector all respond to the market, but do so according to their position within the institutional hierarchy, each imitating the strategies of other universities of similar size and status. This reduces both diversity and risk, for if a competitor's strategies fail, one's relative position will remain unchanged (Marginson, 1999, p. 16). Diversity in the Australian university sector is thus largely limited to size and wealth, the pay and work conditions for staff (Lafferty & Fleming, 2000, p. 263), the composition of student bodies and student catchment areas (Marginson, 1999, p. 22), and increasingly their sources of revenue. Wealthier institutions have a far smaller proportion of their operating budgets provided by the state and thus greater independence from national policies. These institutions are therefore more firmly integrated into the extra-national 'circuits of knowledge' (Kauppinen, 2013, p. 8), their members sharing the interests of other elites within the global academic community, and are relatively more responsive to the standards and pressures of the global knowledge market. These are the
universities likely to adopt the natural science model of research. The less wealthy institutions, with less access to private resources and consequently more tightly tethered to the restrictions of the national context, adopt alternative market strategies (e.g., seeking out part-time students or offering distance education programs). Importantly, given this group's exclusion from earning significant competitive research income (Marginson, 1999, p. 18), universities in the lowest rank, paradoxically, have relatively greater freedom to pursue the building of theoretical, disciplinary knowledge, particularly as these activities can be achieved with low levels of funding.

Conclusions and reflections

If the results of this study can be generalised to all areas of sociological practice – and there is no reason to suppose health sociology is unique in its high regard for theory or methodological pluralism – and to other countries, these findings raise concerns about the future of disciplinary knowledge and practice. They suggest that institutions, and the academics working within them, have adapted to the increasing marketisation of the sector by altering the form and content of scholarly knowledge and downgrading its unique strengths. Where methodological pluralism and the use of theory were once promoted as 'core strengths' and defining features of the discipline, these are now at odds with the widely adopted, external systems of evaluation. Pressures to increase the number of papers published in the major American and European journals, to raise citation rates and to attract external sources of funding may have resulted in a greater presence of Australian universities within the world ranking systems, but appear to have achieved this to the detriment of disciplinary knowledge.

And disciplinary knowledge is worth protecting. Disciplines structure and guide intellectual endeavour; they impose theoretical and methodological rules, but also offer opportunities and pathways to new knowledge. Disciplines are also the social spaces within which academic communities are protected and regenerated. Where markets are allowed to replace disciplines, dictate the rules and pathways of intellectual life and determine institutional practices, we are likely to see an entrenchment of the social inequalities of the global knowledge system and rapid growth in the production of 'profitable', less critical, less theorised knowledges.

Current debates about university ranking systems focus on the need for improvement to ensure these measure 'output' and 'performance' rather than simply prestige and status.
Efforts to expand the sources of data for these systems, to eliminate the bias toward the sciences and the Anglo-American university sector, are welcome. However, the findings from this study demonstrate the need to challenge and problematise the system of ranking itself, and to consider the broad variety of work undertaken in diverse disciplines across the university sector – including efforts towards scholarship – when universities are under evaluation.

Notes
1. Colleges of Advanced Education were a creation of the Menzies government. Initially Teachers Colleges, they eventually broadened their offerings to award diplomas and certificates in subjects such as pharmacy. Institutes of Technology were also vocationally oriented institutions, focusing on the applied sciences, electronics and technology.
2. The Shanghai Jiao Tong University Academic Ranking of World Universities can be found at http://www.shanghairanking.com/FieldSOC2011.html.
3. The Times Higher Education rankings can be found at http://www.topuniversities.com/universityrankings.
4. The methodology for the Quacquarelli Symonds rankings can be found at http://www.topuniversities.com/world-university-rankings/understanding-qs-world-university-rankings-methodology. The QS World University Rankings system collaborated with The Times until 2010, at which point they parted with considerable animosity. Both offer essentially the same methodology, ostensibly measuring performance according to subject area.

Notes on contributor

Fran Collyer is a sociologist at the University of Sydney, National Convenor of the Health Section of The Australian Sociological Association, a member of the Health Governance Network, and former editor of the Health Sociology Review. Fran publishes in the fields of the sociology of science, technology, history and health and is co-author of Public Enterprise Divestment: Australian Case Studies (2001) and author of Mapping the Sociology of Health and Medicine (2012).

References
Albert, M. (2003). Universities and the market economy. Higher Education, 45, 147–182.
Alexander, J.C. (1987). The centrality of the classics. In A. Giddens & J.H. Turner (Eds.), Social theory today (pp. 11–57). Cambridge: Polity Press.
Amsler, S.S., & Bolsmann, C. (2012). University ranking as social exclusion. British Journal of Sociology of Education, 33, 283–301.
Arvanitis, R., Waast, R., & Gaillard, J. (2000). Science in Africa: A bibliometric panorama using PASCAL database. Scientometrics, 47, 457–473.
Bulmer, M. (1985). The development of sociology and of empirical social research in Britain. In M. Bulmer (Ed.), Essays on the history of British sociological research (pp. 3–35). Cambridge: Cambridge University Press.
Camic, C., & Xie, Y. (1994). The statistical turn in American social science. American Sociological Review, 59, 773–805.
Collyer, F.M. (2007). A sociological approach to workforce shortages. Health Sociology Review, 16(3–4), 248–262.
Collyer, F.M. (2009a). The rise and fall of theoretical paradigms in health and medical sociology: 1960–2009. In S. Lockie, D. Bissell, A. Greig, M. Hynes, D. Marsh, L. Saha, . . . D. Woodman (Eds.), The future of sociology: 2009 TASA refereed conference proceedings, Australian National University, Canberra, December 1–4, 2009. Canberra: TASA. Retrieved from http://www.tasa.org.au/conferences/conferencepapers09/
Collyer, F.M. (2009b). Work environments: Their impact on theorising in the sociology of health, illness and medicine. In S. Lockie, D. Bissell, A. Greig, M. Hynes, D. Marsh, L. Saha, . . . D. Woodman (Eds.), The future of sociology: 2009 TASA refereed conference proceedings, Australian National University, Canberra, December 1–4, 2009. Canberra: TASA. Retrieved from http://www.tasa.org.au/conferences/conferencepapers09/
Collyer, F.M. (2010). Origins and canons: Medicine and the history of sociology. History of the Human Sciences, 23(2), 86–108.
Collyer, F.M. (2012). Mapping the sociology of health and medicine: America, Britain and Australia compared. Basingstoke: Palgrave Macmillan.
Connell, R.W. (1997). Why is classical theory classical? American Journal of Sociology, 102, 1511–1557.
Connell, R.W., Wood, J., & Crawford, J. (2005). The global connections of intellectual workers. International Sociology, 20, 5–26.
Drahos, P., & Braithwaite, J. (2002). Information feudalism. New York: The New Press.
Giddens, A. (1989). Sociology. Chicago, IL: University of Chicago Press.
Halpern, S., & Anspach, R. (1993). The study of medical institutions. Work and Occupations, 20, 279–295.
Harley, K. (2008). Theory use in introductory sociology textbooks. Current Sociology, 56, 289–306.
Hatt, G. (1996). "Feeling low": The emergence of a concept of low blood pressure and the representation of an embodied identity. Annual Review of Health Social Sciences, 6, 57–80.
Ilerbaig, J. (1999). Allied sciences and fundamental problems. Journal of the History of Biology, 32, 439–469.
Kauppinen, I. (2013). Academic capitalism and the informational fraction of the transnational class. Globalisation, Societies and Education, 11, 1–22. doi:10.1080/14767724.2012.678763
Kayrooz, C., Kinnear, P., & Preston, P. (2001). Academic freedom and commercialisation of Australian universities. Discussion Paper No. 37. Canberra: Australia Institute, Australian National University.
Lafferty, G., & Fleming, J. (2000). The restructuring of academic work in Australia. British Journal of Sociology of Education, 21, 257–267.
Larsen, A.-C. (1996). The child health service. Annual Review of Health Social Sciences, 6, 113–137.
Lupton, D. (1993). Is there life after Foucault? Australian Journal of Public Health, 17, 298–300.
Marginson, S. (1999). Diversity and convergence in Australian higher education. Australian Universities' Review, 42, 12–23.
Marginson, S. (2007). Global university rankings: Implications in general and for Australia. Journal of Higher Education Policy and Management, 29, 131–142.
Marginson, S. (2009). Open source knowledge and university rankings. Thesis Eleven, 96, 9–39.
Marginson, S. (2010). Higher education in the global knowledge economy. Procedia Social and Behavioral Sciences, 2, 6962–6980.
Marginson, S., & van der Wende, M. (2007). To rank or to be ranked: The impact of global rankings in higher education. Journal of Studies in International Education, 11, 306–329.
Moutsios, S. (2010). Power, politics and transnational policy-making in education. Globalisation, Societies and Education, 8, 121–141.
Oliver, I. (1983). The 'old' and the 'new' hermeneutic in sociological theory. The British Journal of Sociology, 34, 519–553.
Pearlin, L.I. (1992). Structure and meaning in medical sociology. Journal of Health and Social Behavior, 33, 1–9.
Rhoades, G., & Slaughter, S. (1998). Academic capitalism, managed professionals, and supply-side higher education. In R. Martin (Ed.), Chalk lines: The politics of work in the managed university (pp. 33–68). Durham, NC: Duke University Press.
Robinson, W.I. (2004). A theory of global capitalism. Baltimore, MD: Johns Hopkins University Press.
Ross, D., & Porter, T. (2003). Changing contours of social science disciplines. In T. Porter & D. Ross (Eds.), The Cambridge history of science (Vol. 7, pp. 205–237). Cambridge: Cambridge University Press.
Saisana, M., & D'Hombres, B. (2008). Higher education rankings (JRC Scientific and Technical Reports, 23487 EN). Italy: European Commission.
Seale, C. (2008). Mapping the field of medical sociology. Sociology of Health and Illness, 30, 677–695.
Sheil, T. (2010). Moving beyond university rankings. Australian Universities' Review, 52, 69–76.
Sklair, L. (2002). Globalization. Oxford: Oxford University Press.
Slaughter, S., & Leslie, L. (1997). Academic capitalism. Baltimore, MD: The Johns Hopkins University Press.
Slaughter, S., & Rhoades, G. (2004). Academic capitalism and the new economy. Baltimore, MD: The Johns Hopkins University Press.
Sturdy, S., & Cooter, R. (1998). Science, scientific management, and the transformation of medicine in Britain c.1870–1950. History of Science, 36, 421–466.
Usher, A. (2009). University rankings 2.0. Australian Universities' Review, 51, 87–90.
Van Dyke, N. (2005). Twenty years of university report cards. Higher Education in Europe, 30, 103–124.
Wildavsky, B. (2010). The great brain race. Princeton, NJ: Princeton University Press.
Williams, R. (2008). Methodology, meaning and usefulness of rankings. Australian Universities' Review, 50, 51–58.
Williams, R. (2010). Research output of Australian universities: Are the newer institutions catching up? Australian Universities' Review, 52, 32–36.
Willis, E. (1991). The sociology of health and illness in Australia. Annual Review of Health Social Sciences, 1, 46–53.
Willis, E., & Broom, A. (2004). State of the art. Health Sociology Review, 13, 122–144.
