Classification and Ranking of Higher Engineering Education Programmes and Institutions: the IST View

I. C. Teixeira¹, J. P. Teixeira¹, M. Pile², and D. Durão³
¹Elect. Eng. and Computer Science Dept., IST; ²GEP, IST; ³President, IST
IST (Instituto Superior Técnico), Universidade Técnica de Lisboa, Av. Rovisco Pais, 1096 Lisboa Codex, Portugal
ABSTRACT
Ranking is a polemic concept within Higher Education Institutions. The polemic nature of the topic, and the danger of incorrect conclusions being drawn from a straightforward analysis of figures, can be mitigated when the different universes of users can access adequate information. In fact, Quality has different meanings for students, employers, owners (public and/or private), and other financial supporters. The main purpose of this paper is to provide a new approach to this problem, based on the definition of different Quality metrics for the different universes of users involved in the educational processes. Additionally, methodologies for supporting the evaluation, classification and ranking of programmes and Institutions are presented.
1. Introduction
Higher Engineering Education Institutions (HEEIs) today face conflicting requirements. On the one hand, they face global competitiveness, a result of the mobility of highly skilled people (both staff and students) within large economic regions. On the other hand, they are pushed to accept increasing numbers of candidates while coping with decreasing budgets (especially decreasing public funding) and the increasing costs of new educational technologies and multimedia equipment. A rapidly changing environment, strong networking demands, the interdisciplinarity of educational curricula, and formal and continuing education further extend a list of challenges that seems endless. Under these circumstances, it is not surprising that the users of HEEIs and their engineering programmes (both students and employers), facing a large available offer, as well as the Institutions providing financial support, need accurate Quality measures and indicators to guide their choices.

Since it may have an impact on HEEI population and eventually on financial support, programme and Institution ranking [1,3] is a controversial topic. Some agree, some disagree, and advantages and drawbacks are pointed out by both sides. Like any important issue, it triggers discussion. In fact, like it or not, either explicitly or implicitly, ranking is unavoidable once the figures and conclusions coming out of the self-evaluation processes carried out by almost every school enter the limelight by the hand of the media. It is the result of living in an information society that privileges Quality [11]. However, the use of uniform (and ambiguous) criteria to provide information across different cultural and economic environments necessarily leads to misleading conclusions, with significant, hurtful consequences for users and for the image of HEEIs.

The purpose of this paper is to introduce a classification and ranking methodology, currently under development and implementation at Instituto Superior Técnico (IST), the School of Engineering of
the Technical University of Lisboa (UTL) [4,12,13,16]. A new model is introduced, parametrizable for five different sets of users. For each universe of users, a set of objectives that they value in their interaction with the HEEI is defined. Key factors of success are quantified by means of Indicators of Success (IoS), which are usually correlated. Factors of success are further partitioned into endogenous factors (e.g., the performance of the educational process) and exogenous factors (e.g., public funding, or the regional context in which the HEEI operates). Each set of users, j, attributes weighting factors, pij, to the IoS it values, allowing the construction of a Valorisation Matrix and of Classification Functions, CFj = f(IoSi, pij). Classification criteria consider scientific, pedagogic, administrative [17] and socio-economic aspects. In order to allow a fair ranking, the IoS need to be meaningful to the different universes to which they are applied, particularly those belonging to the EU [10,14,15].
2. Methodology

The main objective of the proposed classification methodology is to provide indicators that allow the different actors playing a role in the educational processes to assess the performance and quality of programmes and institutions. The first step of the methodology is the characterisation, in terms of their IoS, of the five universes mainly concerned with the education process:
• students
• employers
• owners (public and/or private)
• other financial supporters (e.g., the EU or national R&D sponsors), and
• customers (which may contract services, e.g., advanced Lab facilities).
Each of these universes of users has its own objectives, which can be mapped into the set of educational process characteristics dominantly valued by those users. Once the objectives are stated, it is possible to define the corresponding classification factors, or Classification Function Factors (CFF), which can be quantified by classification indicators, or Indicators of Success (IoS) [6-9]. The assessment can then be carried out against the pre-defined objectives, allowing classification and eventual ranking in terms of the degree of achievement of those objectives. As illustrative examples, tables 1 and 2 summarise the dominantly valued characteristics for measuring the quality of an institution, from the point of view of students in French universities [18] and as presented in the Portuguese magazine EXAME [19], in connection with the evaluation of Economics and Management programmes in Portugal.
Classification Function Factors (point of view of the students):
• teaching environment
• university status
• professional outputs

Table 1: Critical factors for students in French Universities [18].
Classification Function Factors (point of view of EXAME):
• entry point
• students' choice
• staff qualification
• faculty image

Table 2: Ranking criteria, according to EXAME, for the Economics and Management Programmes in Portuguese universities [19].

In fig. 1, a graphic representation of the methodology is depicted.
[Fig. 1 here: block diagram linking the user universes (j = 1, 2, ..., 5) to their objectives, the CFF (split into nuclear/differentiator and endogenous/exogenous factors), the IoS, the weights pij of the valorisation matrix, the Classification Functions CFj(IoSi, pij), and the Fidelization Functions FFj(IoSi, pij, dIoSi/dt, dpij/dt).]
Fig. 1: Classification (CF) and Fidelization (FF) functions evaluated as functions of weighted IoS.

The methodology is applicable to the assessment of Institutions and programmes in diverse areas of knowledge. This is achieved by separating nuclear factors, which are common to the different scientific areas, from differentiator factors, which take into account the specificity of each scientific area, as well as the different weights attributed by each universe of users. Moreover, we differentiate endogenous and exogenous factors. A factor is referred to as endogenous if it is a characteristic of the institution and can therefore be influenced by it. An example of an endogenous factor is the performance of the educational process, which can be quantified in terms of completion rates and final classifications. Exogenous factors do not depend, or depend only weakly, on the institution under assessment; examples for a public institution are the public budget or the geographic location.

Another aspect considered in the evaluation of the Quality of an Institution is its capability to retain its clients, that is, its universe of users (either undergraduates or graduates in continuing education processes). This aspect is contemplated in the Fidelization Functions (FF). The FF depend not only on the different IoS and their relative weights, but also on the trends observed in those weights and, eventually, on the classification functions aggregating different factors.

The proposed methodology allows the parametric ranking of programmes and institutions. Each IoS is weighted differently by the different universes. A valorisation matrix is constructed, in which each entry, pij, describes the weight attributed by universe j to indicator IoSi. The use of weighted factors links the classification factors to programme and/or institution ranking. In fact, it allows the definition of Classification Functions (CF) which, for each universe j, map the relative importance attributed by the users of that universe to the different IoS (Fig. 1). The different weights attributed by different universes of users are exemplified in fig. 2. These data concern the application of the methodology to IST.
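To make the construction concrete, the valorisation matrix and Classification Functions can be sketched as a simple weighted sum, CFj = Σi pij · IoSi. The following is only an illustrative sketch: all indicator names, scores and weights are hypothetical examples, not data from the IST study.

```python
# Illustrative sketch of a valorisation matrix and Classification Functions.
# All indicator names, scores and weights are hypothetical, not data from the paper.

# Indicators of Success (IoS), normalised here to a common 0-1 scale
ios_scores = {
    "completion_rate": 0.78,
    "staff_qualification": 0.65,
    "rd_income": 0.40,
}

# Valorisation matrix: entry p_ij is the weight attributed by universe j to IoS_i
valorisation = {
    "students":  {"completion_rate": 0.5, "staff_qualification": 0.3, "rd_income": 0.2},
    "employers": {"completion_rate": 0.3, "staff_qualification": 0.5, "rd_income": 0.2},
}

def classification_function(universe: str) -> float:
    """CF_j = sum_i p_ij * IoS_i, one value per universe of users j."""
    weights = valorisation[universe]
    return sum(weights[i] * ios_scores[i] for i in weights)

for j in valorisation:
    print(f"CF_{j} = {classification_function(j):.3f}")
```

Ranking a set of programmes for a given universe then reduces to sorting them by their CFj values, which is what makes the ranking parametric in the users' weights.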
It can be observed in this figure that the "social" factor is highly valued by the students, but is of limited importance to the employers. Quality is important to all universes of users. The IoS must therefore be carefully chosen according to the objectives to be achieved, and may be updated; the weights, too, can be tuned dynamically, according to experience and to the changing preferences of the users. In particular, IoS defined for comparison purposes, that is, for ranking, must be meaningful for all the universes under scrutiny. As an example, consider the factors used in the UK [20] for University ranking purposes, as indicated in table 3. The factor "entry point" included in table 3 is also included in table 2. However, while for British universities this factor is quantified in terms of the "percentage of A-level applicants", for other countries, such as Portugal, the existence of a "numerus clausus" distorts the process. In the latter case, the entry point results, among other factors, from the number of positions available in the different universities and from the ranking of the applicants by final classification.
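The paper gives no closed form for the Fidelization Functions, only their arguments, FFj(IoSi, pij, dIoSi/dt, dpij/dt). One possible sketch, under the assumption that the trend terms are approximated by finite differences between two assessment periods and enter through a hypothetical tunable coefficient LAMBDA, is:

```python
# Illustrative sketch of a Fidelization Function FF_j(IoS_i, p_ij, dIoS_i/dt, dp_ij/dt).
# The paper gives no closed form; the trend term and the coefficient LAMBDA below
# are hypothetical modelling choices, and all data are invented examples.

LAMBDA = 0.5  # hypothetical weight of the trend (improvement) term

def fidelization(ios_now, ios_prev, w_now, w_prev, lam=LAMBDA):
    """FF_j = sum_i [w_i * s_i + lam * d(w_i * s_i)/dt], with the derivative
    approximated by a one-period finite difference."""
    ff = 0.0
    for i in ios_now:
        level = w_now[i] * ios_now[i]
        trend = w_now[i] * ios_now[i] - w_prev[i] * ios_prev[i]  # one-period change
        ff += level + lam * trend
    return ff

# Hypothetical two-period data for one universe of users
ios_prev = {"completion_rate": 0.70, "staff_qualification": 0.60}
ios_now  = {"completion_rate": 0.78, "staff_qualification": 0.65}
w_prev   = {"completion_rate": 0.5,  "staff_qualification": 0.5}
w_now    = {"completion_rate": 0.6,  "staff_qualification": 0.4}  # preferences shifted

print(f"FF = {fidelization(ios_now, ios_prev, w_now, w_prev):.3f}")
```

With this form, an institution whose weighted indicators are improving scores above one with the same current levels but a declining trend, which captures the retention ("fidelization") idea in a simple way.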
[Fig. 2 here: bar chart (scale 0 to 20) of the relative importance of different IoS — internationalisation, responsibility, on-time response capability, flexibility in providing services, costs, quality, social factors, R&D added value, system performance, staff qualification, and the E/D ratio — for the different universes of users (students, employers, owners, sponsors).]
Fig. 2: Relative importance of different IoS for different universes of users.

Classification factors — Indicators:
1. entry point — % of A-level students
2. student/staff ratios — equivalent full-time students / full-time teaching staff
3. R&D, consulting income — income / full-time staff (%)
4. staff qualification — # Ph.D. / # staff (%)
5. staff qualification — % professional
6. library spending — % budget
7. student (S) accommodation — % S accommodated on the campus
8. completion rates — % S undertaking a qualification
9. firsts — % of first-class degrees
10. research ratings — independent assessment of R&D
11. value added — function of 1, 8 and 13
12. graduate students — % S in graduate studies
13. employment (within 6 months of graduation) — % graduates in permanent employment; % non-employed graduates; % graduates going on to R&D
14. international students — % international S
Key factors of success: # (applicants), # (performance), # (image).
Table 3: Classification criteria and IoS in British Universities [20].

3. Conclusions

Polemic as it is, the classification and ranking of programmes and Higher Education Institutions are unavoidable in the new context of teaching and the new markets within the EU. Correct procedures for carrying out this classification and ranking require the clear definition of objectives, factors and indicators of success that are meaningful for the whole universe of users, and the possibility of weighting the different factors differently. In this paper, a methodology has been presented in which indicators of success are weighted according to their importance for the different universes of HEEI users, in order to allow correct definitions of classification and Fidelization functions and of ranking.

References

[1] Brennan, J., Goedegebuure, L.C.J., Shah, T., Westerheijden, D.F. & Weusthof, P.J.M., 1992: "Towards a Methodology for Comparative Quality Assessment in European Higher Education: A Pilot Study on Economics in Germany, The Netherlands and the U.K." (London: CANN).
[2] Goedegebuure, L.C.J., Kaiser, F., Maassen, P.A.M., Meek, V.L., van Vught, F.A. & De Weert, E., 1992: "Higher Education in International Comparative Perspective" (Enschede: CHEPS).
[3] ABET (Accreditation Board for Engineering and Technology), 1992-93: "Criteria for Accrediting Programs in Engineering in the United States", Engineering Accreditation Commission.
[4] 1995: "Avaliação do Desempenho das Universidades", seminar organised by the Fundação das Universidades Portuguesas (in Portuguese).
[5] Birnbaum, R., 1989: "The Quality Club: How College Presidents Assess Quality", in: Quality in the Academy, Proceedings from a National Symposium, National Center for Governance and Finance, University of Maryland, USA.
[6] Cave, M., et al., 1988: "The Use of Performance Indicators in Higher Education" (London: Kingsley).
[7] Johnes, J. & Taylor, J., 1990: "Performance Indicators in Higher Education: UK Universities" (Buckingham: Open University / SRHE).
[8] Link, R.D., 1992: "Some Principles for Application of Performance Indicators in Higher Education", Higher Education Management, 4: 194-203.
[9] Lucier, P., 1992: "Performance Indicators in Higher Education: Lowering the Tension of the Debate", Higher Education Management, 4: 204-214.
[10] Richter, R., 1992: "Some Remarks on the Development of Quality Assessment Procedures in German Higher Education", Fourth International Conference on Assessing Quality in Higher Education, Enschede, 28-30 July.
[11] Van Vught, F.A. & Westerheijden, D.F., 1993: "Quality Management and Quality Assurance in European Higher Education: Methods and Mechanisms" (Luxembourg: Office for Official Publications of the EEC).
[12] Dente, A., Lourtie, I., Ribeiro, I., Teixeira, I., Piedade, M. & Teixeira, J.P., 1994: "Metodologia para Avaliação de Licenciaturas: Experiência Piloto no IST", Congresso da Ordem dos Engenheiros, Com. 109, Tema Geral I (in Portuguese).
[13] 1995: "Assessment of the Programme (Licenciatura) on Mechanical Engineering at Instituto Superior Técnico, Portugal", European Pilot Project for Evaluating Quality in Higher Education, Final Report.
[14] 1992: "Quality Assessment in European Higher Education: A Report on Methods and Mechanisms, and Policy Recommendations to the European Community", Liaison Committee of Rectors' Conferences, Report, contract nr. 92-401-ETU-041/BE.
[15] 1995: "Programme for the Institutional Assessment of University Standards", Consejo de Universidades, Ministerio de Educación y Ciencia, Spain.
[16] Pile, M., Teixeira, J.P., Teixeira, I.C. & Durão, D., 1995: "Quality in Engineering Education at IST: Strategy, Implementation and Trends", Conf. on Quality in Engineering Education (CESAER), Prague.
[17] Middaugh, M.F. & Hollowell, D.E., 1992: "Developing Appropriate Measures of Academic and Administrative Productivity as Budget Support Data for Allocation Decisions", Higher Education Management, 4: 164-178.
[18] 1992: "Enquête: Universités: le Palmarès des Étudiants", Le Monde de l'Éducation, nº 195, pp. 26-48.
[19] Ribeiro, D., 1995: "Universidades: as Melhores do Ano", EXAME, pp. 28-30.
[20] O'Leary, J. & Cannon, T. (eds.), 1993: "Good Universities Guide", chapter 8, "University Rankings", Times Books.