Now that the autumn season of university rankings is over, it may be worth reflecting a little on what they do or do not tell us, and what merit there may be in them. As is clear from much academic commentary worldwide, and indeed from comments posted by readers of this blog, many in the higher education community dislike league tables and believe they play a negative role in the development of universities. What is beyond doubt, however, is that the rankings are here to stay and, for better or for worse, will continue to influence potential students, academics themselves and external stakeholders.
One question in particular is, however, worth asking: if teaching is still the core activity of most universities, how useful are rankings, given that on the whole they pay little or no attention to it? Just one teaching-related metric tends to have an impact: the student-to-teacher ratio. This does tell us something about each institution, but it rests on what is now perhaps a financially non-viable assumption, namely that universities should strive to keep classes as small as possible, and that larger classes suggest poorer quality. The latter may well be true, but financial pressures are pushing everyone in that direction, and we now need better ways of differentiating between institutions in terms of teaching quality.
The publishers of the QS World University Rankings have set out the dilemma as follows:
‘In our opinion teaching quality, as opposed to teaching commitment, cannot be effectively ranked, because there are no independent experts and no suitable surrogate metrics.’
As is often said, the things that get measured get done. If rankings continue into a new generation while still neglecting teaching quality, academics will take their cue from that and focus on whatever gets results in the tables (chiefly research). We urgently need to address this and to find acceptable ways of factoring in teaching quality.