Jul 21, 2014

‘Universities Will Be Forced To Engage’ – The Rankings Game

Contributing Editor Jack Leahy examines the deep impact that international university ranking systems, despite their inadequacies, can have on third-level institutions.

Jack Leahy | Contributing Editor

Global university rankings are an increasingly competitive business – not only for the participating universities, but for the companies that research and publish them. While few can agree on which of the many methodologies are the most robust, there is consensus that global university rankings are here to stay and that their influence is growing. Universities and students are taking note and action.

The first international benchmarking exercise, the Shanghai Rankings, was published in 2003 by Shanghai Jiao Tong University in China. The rankings were initially used to benchmark Chinese universities in an international market for higher education, following the launch of a government initiative to establish Chinese universities as world leaders in educational excellence.

The increased mobility of students since the dawn of that first ranking exercise has made direct competitors of universities thousands of miles apart. The global profile of the major annual rankings studies – QS, Times Higher Education, and the original Shanghai – compels universities to measure and improve their global competitiveness according to the metrics employed. Research output and impact, measurable by citations, is highly valued, while more intangible and emotional factors like student experience and teaching quality are generally given a lower weighting.

When interviewed by The University Times in September 2012, then Minister for Education & Skills Ruairí Quinn pleaded for calm as Irish universities recorded a third straight year of alarming decline in performance. The veteran Dublin South-East representative claimed that Ireland’s reputational damage arising from the economic crisis was a major factor in decreased competitiveness, and that the quality of education available in Ireland had not been significantly downgraded. He called for a (since unrealised) concerted attempt to quantify teaching quality and educational outcomes for rankings purposes during Ireland’s EU presidency.

Quinn’s plea to understand rankings research and its biases seems sensible when many OECD countries are revising investment strategies to pander to the whims and vicissitudes of studies of contested methodology. At the same time, exercises like this are important by the simple fact of their existence; they are an unavoidable fact of life for universities playing the global game.

The importance of rankings in an Irish context has been emphasised in recent years of decreased state funding for third-level. As universities strategically globalise their admissions in pursuit of vastly more sizeable non-EU fee contributions, they enter competition for highly mobile learners with dozens of exceptional alternatives. Without an impressive ranking on a global stage, recruitment competitiveness is hindered from the outset.

The increasing disadvantage created by falling rankings is damaging even before you consider the extensive global recruitment infrastructure required to run an operation of the necessary scale. Trinity’s Global Relations office is by far Ireland’s most sophisticated and well-resourced – in terms of both finance and expertise – but the initial capital investment it required will prove prohibitive to many institutions with similar aspirations.

In Trinity, ranking announcements are met with public caution, often emphasising the kind of methodological concerns raised by the former minister. At a policy level, however, performance in international benchmarking exercises is highly valued and is a core subject of major policy considerations.

Furthermore, the College’s preference for a particular format of rankings criteria is clear. U-Multirank, a new system announced this year that considers broader criteria such as artistic output, was hardly addressed at all, whereas the QS rankings launch was held in TCD as recently as 2012 and QS results are often discussed at Board level. The 2013 results, which showed a slight rise in Trinity’s ranking, were quietly celebrated as a sign of steadiness despite strife.

Despite experiencing a generally south-bound trajectory in the major rankings, Trinity is a beneficiary of a system that upholds and celebrates models of education that are traditional in their constitution. QS, for example, assesses Academic Reputation (40%), Employer Reputation (10%), Student-to-Faculty Ratio (20%), Citations per Faculty (20%), International Faculty Ratio (5%) and International Student Ratio (5%).
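
To see how these weightings combine into a single score, consider a hypothetical university – the figures here are illustrative, not actual QS indicator values – scoring 90 for academic reputation, 80 for employer reputation, 70 for student-to-faculty ratio, 60 for citations per faculty, and 50 for each of the two international ratios. Its overall score would be the weighted sum:

0.40 × 90 + 0.10 × 80 + 0.20 × 70 + 0.20 × 60 + 0.05 × 50 + 0.05 × 50 = 75

Note how the two reputational surveys alone contribute 44 of those 75 points, dwarfing the influence of every other indicator.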

That reputational factors constitute fully half of the judgment on a university’s global competitiveness is problematic and inherently favours older, historic universities like Trinity. Comparative data is a limited tool for comparing universities at the best of times, but that so intangible a criterion can be so heavily weighted should give readers pause.

Furthermore, most major rankings studies make no effort to quantify teaching excellence, and, by focusing exclusively on full-time students, they exclude many larger universities across the world where part-time students are much more common. Research output is the bread and butter of the traditional academy, but the growing popularity of service-learning in response to economic trends suggests that a greater focus on instruction would be appropriate.

The London School of Economics (LSE) states on its website that ‘we remain concerned that all of the global rankings – by some way the most important for us, given our highly international orientation – suffer from inbuilt biases in favour of multi-faculty universities with full STEM (Science, Technology, Engineering and Mathematics) offerings and against small, specialist, mainly non-STEM universities such as LSE’. Teaching mainly in the humanities and social sciences, LSE is reputationally misrepresented by studies that favour the sciences.

Methodological concerns have not dampened the extent to which funding institutions and prospective students have consumed university rankings. Seemingly gaining authority and influence, the rankings are here to stay – and universities will be forced to engage in order to maintain a desirable international competitiveness. The system needs to be embraced and understood – only then can it be bettered and more broadly representative.
