Higher education performance
The statement usually attributed to the author and management consultant Peter Drucker – that ‘what gets measured gets done’ – has nowhere been as enthusiastically adopted as in higher education over recent decades. Anyone working in a university across much of the world will be aware of performance criteria governing everything from institutional funding to personal career development. So we assess the student’s ‘learning outcomes’ and examination results, the professor’s publications, the university’s attrition rate; in fact, anything we believe we can measure. The statistical outputs from all this, unmediated by any coherent analysis, are then published as some table or other that, in turn, determines resources.
It is hard to argue against league tables, because these present an assessment of performance, however imperfect, and thereby allow interested onlookers to form a judgement about institutional quality. Those putting forward a case for universities to be left alone to find their own way of delivering good quality without external interference are not going to find a sympathetic audience: it is not the spirit of the age. Nevertheless, there is also evidence that the search for performance indicators has distorted strategy and sometimes incentivised very questionable policies. For example, it has led to a serious downgrading of teaching as against research. So what should be done?
First, if we are to have performance indicators, we should have fewer. In a recent presentation to a meeting on higher education strategy, the chief executive of Ireland’s Higher Education Authority, Tom Boland, listed 32 key performance indicators that could be used to inform strategy. Another example is the list recently proposed for Portuguese universities (in this document, at page 10). However, the effect of such lists is to reduce strategy to ticking boxes (to use that rather annoying expression); it is no longer strategy but risk management, the key risk to be managed being the loss of public money.
Secondly, if you are setting up performance indicators, keep them consistent. In the Boland list, inputs are mixed with outputs in a way that is unlikely to produce anything coherent, and relatively trivial indicators compete with more fundamental ones. Looking at them all together, you get no sense of mission or direction; you just have a list.
Thirdly, keep them relevant. Just because something can be measured does not mean that measuring it tells us anything. Yet there is a lot of evidence that reporting on certain aspects has been required not because it is useful but because it is possible.
Overall, it is hard to resist the suspicion that the culture of performance indicators has been more one of bureaucratisation than of transparency. And yet clarity about purpose, mission and priorities is important, as is the capacity to report on how far these have been achieved. It’s not that we shouldn’t do this; we just need to do it better.