Comment & Analysis
Oct 1, 2015

When You Compare Ranking Methodologies, You Realise Their Disconnect from Students

Aisling Curtis compares the QS and Times Higher Education Rankings, and says that for students, they're not that important.

Aisling Curtis, Opinion Editor
Sergey Alifanov for The University Times

In terms of international rankings, Trinity has had a bad month: the university fell seven places to 78th in the QS World University Rankings and 22 places in the Times Higher Education World University Rankings, placing at 160th internationally.

For an institution that placed 76th on the same list in 2011, this reflects a significant and concerning drop. What is most striking, however, is the difference between Trinity’s placement in both sets of rankings: in the same month, Trinity’s place in the world has been pegged at very different points – the 78th best in the world and the 160th best in the world, over 80 places apart. And how close UCD comes to Trinity also varies significantly – from as close as 16 places in the Times rankings to over 75 in the QS ones.

So, what explains the difference?


Following last year’s Times rankings, Daniel O’Brien analysed the five major categories that dictate where a university is placed on the list. Thirteen separate indicators are grouped into these five overarching criteria: teaching, research, citations, international mix and industry income. While some of these scores are undoubtedly useful to students, such as teaching, many of the others that make up the bulk of the ranking are only relevant to academics or those pursuing an advanced degree.

These categories aren’t very useful for an undergraduate student looking to see where they will have the best educational experience

For students, a university’s published research output or income generated from innovation does little to indicate how well students will be treated there. Quality of research is not an indicator of quality of teaching or standard of degree, and many academics who excel in a lab can do poorly in a classroom. These categories are useful for those pursuing high-level academia, but of less value to an undergraduate student looking to see where they will have the best educational experience or which university will offer the best job prospects.

In fact, even the teaching category in the Times rankings is less relevant, in that it primarily evaluates a university on its reputation amongst academics rather than its reputation among companies and industry, factors which matter to students looking to get the most earning and career potential out of their degree. While academic reputation is useful for assessing the consensus of international academic opinion, it is not that helpful as a measure of teaching quality that students will actually encounter. Problems that hinder the student experience, such as Trinity’s complex bureaucracy and the accommodation crisis affecting Dublin, will not show up in such a measure. Factors like these outright improve or detract from a degree, and yet they are not captured in the methodology that the rankings use.

If the Times Higher Education rankings pull categories together in this way, do the QS rankings do the same? They base their list on six performance indicators that assess research, teaching, employability and internationalisation. As with the THE list, academic reputation is measured using a global survey of leading academics, a measure which naturally has the same pitfalls as the Times’ rating. However, QS does take into account employer reputation, which makes the score more relevant to students and is unique amongst university rankings. While it only makes up 10 per cent of the actual rating, it places greater emphasis on votes from employers for universities outside their own country, and so assesses industry consensus on an international level.

In many cases, the measure just obscures fundamental issues – or successes – in a university’s policies and day-to-day running

The QS ranking also uses more “hard” data to quantify where a university places. Student-to-faculty ratio assesses whether a university provides small class sizes and so whether it is equipped to give students a more individualised teaching experience. Citations per faculty, as with the Times list, operates under the assumption that the more research citations a faculty has, the more influence it has, and the better the quality of experience it can provide. The final measures in this ranking are the international faculty ratio and the international student ratio, which assume that universities that attract more individuals from abroad have a greater impact.

The two methodologies are similar, although the QS list takes into account employer reputation, which makes its scores more relevant to the average student. Trinity places well on both lists with regard to its numbers of international students and staff, and also has success in citations. It is less successful on the more student-relevant measures, such as student-to-faculty ratio and QS’s employer reputation score. Trinity fares poorly in industry income and research in the Times list, but these criteria carry less weight for the average individual, unless they’re interested in pursuing a high-level further degree.

The methodologies of the rankings reveal who they really matter to. There is certainly a disconnect between how important the rankings are for academics and how important they are for students looking for a high-quality undergraduate degree. In many cases, the measures simply obscure fundamental issues – or successes – in a university’s policies and day-to-day running. While they may accurately discern the research impact of an institution, that impact may be far removed from the realities of a degree programme, and so students should treat the rankings with a certain amount of care.
