Taking charge of your own university rankings
Whenever I raise the topic of university rankings, some readers, either in comments here or in emails sent offline, suggest that I really shouldn’t pay so much attention to them or encourage their authors. I know very well that many academics are deeply sceptical about league tables and don’t believe they reflect any sort of reality; or they suspect that rankings prompt inappropriate behaviour by university managers, or in some other way undermine academic integrity.
In reality, however, league tables are part of the landscape, partly because those who want to enter into any kind of relationship with universities – whether as students, faculty, business partners or donors – take them seriously and want them as a guide. We may wish it were otherwise, but it isn’t. This being so, we need to engage with them, and in that way help to ensure that they are reasonable, accurate and transparent. So, for example, the transformation over the past year or two of the Times Higher Education world rankings owes a lot to academic interaction with the journal and with the company it selected to manage the project.
The best known world rankings – those run by Times Higher Education and by Shanghai Jiao Tong University – have one important thing in common: the global top 10 universities are exclusively American and British. This is tolerated by Asian institutions, which believe they are rising up the tables and are biding their time, but it disturbs the European Union and its member states. In both rankings the highest-placed non-British EU university comes in only at number 39 (French in each table, though not the same university).
Because of this the EU has set out to design its own rankings, to be known as U-Multirank. The thinking behind this is that the established league tables are too focused on research outputs, and in particular on science research; they neglect teaching, fail to encourage diversity of mission, and drive universities into strategies they don’t have the means to deliver. So the new rankings are to be weighted differently, making the resulting table more balanced; moreover, they are to allow users to design and weight their own criteria, so that students (say) can create a league table that more accurately reflects the strengths they are looking for when considering universities.
Can this work? In my view, no, probably not. Rankings are not really meant to provide a method of institutional profiling; rather, they are designed to set out a kind of reputational gold standard. They are not the answer to the question ‘what kind of institution is this?’ – rather, they answer the question ‘what does the world think of this institution?’ This is not a scientific answer (if it were, all rankings would give the same results), but it is an attempt to standardise external evaluation. Moreover, too many people will see U-Multirank as an attempt to support lesser-known European universities by designing the rules to suit them.
Still, if you’re interested: the U-Multirank project is coming to the end of a feasibility evaluation and, if this supports the idea (as it will), the rankings will be rolled out some time over the next year or two. It will be interesting to see whether they attract support. I suspect they will not displace the pre-eminence of Times Higher.