In this game is the REF to blame?

December 23, 2014

Anyone working in and around higher education in the United Kingdom will have been obsessing about the ‘Research Excellence Framework’ (REF) over the past week. According to the REF website, it is ‘the new system for assessing the quality of research in UK higher education institutions’. A total of 154 institutions made 1,911 submissions to this exercise, and last week they found out how they had fared. The results will influence a number of things, including league table positions of universities and public funding. They will also have reinforced a trend to focus research attention and funding on a smaller number of institutions.

REF is the successor to the Research Assessment Exercise, which in turn had been around since the 1980s. The first one of these I had to deal with was conducted in 1992, when I was Dean of the Law School of the University of Hull. While I believe I was rather successful in managing the RAE, in that my department improved hugely between 1992 and the next exercise in 1998, I now believe that most of the decisions we took were good for the RAE and bad for research. In fact, that could be the overall summary for the whole process across the country from the beginnings right up to last week’s REF.

And here are three reasons.

  1. The RAE and REF have, despite claims to the contrary, punished interdisciplinarity, because the units of assessment overwhelmingly focus on outputs within rather than between disciplines. The future of research is interdisciplinary – but academics worried about REF will be wary of focusing too much on such work.
  2. Despite the way in which it aims to reward international recognition, a key impact of the RAE/REF framework is to promote mediocrity. For funding and related reasons, many institutions will try to drive as many academics as possible into published research, spending major resources on pushing average researchers to perform – resources that should really be devoted to supporting those who have the most promise. Of course some excellent researchers have been able to thrive, but in many institutions the RAE/REF process has hindered rather than supported real excellence. On top of that it has diverted some staff from doing what they do really well into doing things they don’t much like. One of the casualties of that, incidentally, is collegiality.
  3. The RAE/REF has produced a stunning bureaucratisation of research. A key difference between research management in my last university in Ireland (where there is no such exercise) and in my current one is the extraordinary amount of time staff have to put into the tactical, operational and administrative maintenance of the REF industry. Also, I shudder to think how much time and resources institutions will have spent last week managing the news of the results. Industrial-scale bureaucracy of course also produces huge costs.

Other equally good reasons for doubting the value of REF have been given by Professor Derek Sayer of Lancaster University, writing in the Guardian.

I am not against competition in research, nor do I believe that research performance should not be monitored. But the RAE/REF process is about ranking universities rather than promoting research. I have no reason to think that anyone who matters is listening, but it is time to think again about this process.

UK Research Assessment – some additional observations

December 18, 2008

From the perspective of UK universities, the outcome of the Research Assessment Exercise is significant in two different ways. First, the RAE has perhaps the greatest impact on reputation of all the metrics used to compile league tables; there is evidence that even student choices are heavily influenced by them. Secondly, the results help to determine state funding (though the precise impact of these latest results on money is not yet known). In past RAEs there has been some tension between these two factors, as institutions struggled to work out how best to maximise both money and reputation when deciding how many staff to enter: if you entered more staff and scored well, the financial benefits were highest – but if the gamble failed and you scored badly, the damage to reputation could be immense. It was all, essentially, a gamble.

In previous RAE outcomes, the exercise tended also to reinforce the – formally abolished but still alive and kicking – binary divide between ‘old’ universities and ‘new’ ones (the former polytechnics). This appears to have been undermined by the 2008 RAE results. The lowest placed ‘old’ university appears to be the University of Wales at Lampeter, at No 83. The highest placed ‘new’ university is the University of Hertfordshire, at No 58. Between these two there is a mix of old and new. So while all universities above 58 are old universities, and all below 83 are new ones or various specialist institutions, the boundary has become more fluid. And in some subject areas this fluidity is more pronounced, with new universities reaching the top ranks. In Ireland of course it is different anyway, with the two ‘new’ universities (in chronological terms), DCU and the University of Limerick, either performing to the same standard as the older ones in research, or sometimes out-performing them.

So what do we make of all this? Are such exercises useful? I have no direct experience of the preparations for the UK’s 2008 RAE, but I was heavily involved in managing subject units for the University of Hull (which on the whole has not done well in 2008) for the previous two RAEs. On the positive side, I found that the RAE had driven home in the academic community the importance of research performance as a way of building an international reputation – so vital for any national university sector that wants to compete on quality internationally. On the negative side, the focus of the earlier RAE cycles was on combating research non-performance among academics, and the result tended to be heavy-handed methods to compel output from staff who, even when they did produce it, were never going to set the world on fire. A lot of very mediocre published output resulted, and arguably truly excellent staff were neglected.

I believe it is right to assess research, and I suspect that we shall be moving in that direction in Ireland. But I also think we should learn from the UK’s RAE and ensure that we avoid the mistakes, and the extraordinary bureaucracy, that accompanied the earlier cycles. If we can do that, we may be able to develop our own system in a way that genuinely adds transparency.

Research assessment

December 18, 2008

Today is December 18th, and the UK Research Assessment Exercise results are published. The RAE website is here, and the full results can be read or downloaded here. I have had a rather cursory glance at these, and there appear to be some surprising results, with some older universities doing less well than expected. The Northern Ireland universities, as far as I can see, have at least in some subjects fared well. I shall try to present a more informed view of the results when I have studied them more closely.

The results are being presented somewhat differently from before, and it is now possible to see in much more detail how units have performed within their subject areas, and in particular what the distribution of staff performance is within specific subject areas.

The debate will now begin again, I imagine, as to how useful this whole exercise is, and the extent to which it does actually promote quality. Here in DCU, we have over the past year or so conducted our own research assessment, and we expect to give some results from that before long. I expect that, at some point, there will be a sector-wide exercise of this nature in Ireland; and when that happens, assuming it does, I hope we avoid some of the mistakes which, in my opinion, were made in the UK.