
In this game is the REF to blame?

December 23, 2014

Anyone working in and around higher education in the United Kingdom will have been obsessing about the ‘Research Excellence Framework’ (REF) over the past week. According to the REF website, it is ‘the new system for assessing the quality of research in UK higher education institutions’. A total of 154 institutions made 1,911 submissions to this exercise, and last week they found out how they had fared. The results will influence a number of things, including league table positions of universities and public funding. They will also have reinforced a trend to focus research attention and funding on a smaller number of institutions.

REF is the successor to the Research Assessment Exercise, which in turn had been around since the 1980s. The first one of these I had to deal with was conducted in 1992, when I was Dean of the Law School of the University of Hull. While I believe I was rather successful in managing the RAE, in that my department improved hugely between 1992 and the next exercise in 1998, I now believe that most of the decisions we took were good for the RAE and bad for research. In fact, that could be the overall summary for the whole process across the country from the beginnings right up to last week’s REF.

And here are three reasons.

  1. The RAE and REF have, despite claims to the contrary, punished interdisciplinarity, because the units of assessment overwhelmingly focus on outputs within rather than between disciplines. The future of research is interdisciplinary – but academics worried about REF will be wary of focusing too much on such work.
  2. Despite the way in which it aims to reward international recognition, a key impact of the RAE/REF framework is to promote mediocrity. For funding and related reasons, many institutions will try to drive as many academics as possible into published research, spending major resources on pushing average researchers to perform – resources that should really be devoted to supporting those who have the most promise. Of course some excellent researchers have been able to thrive, but in many institutions the RAE/REF process has hindered rather than supported real excellence. On top of that it has diverted some staff from doing what they do really well into doing things they don’t much like. One of the casualties of that, incidentally, is collegiality.
  3. The RAE/REF has produced a stunning bureaucratisation of research. A key difference between research management in my last university in Ireland (where there is no such exercise) and in my current one is the extraordinary amount of time staff have to put into the tactical, operational and administrative maintenance of the REF industry. Also, I shudder to think how much time and resources institutions will have spent last week managing the news of the results. Industrial-scale bureaucracy of course also produces huge costs.

Other equally good reasons for doubting the value of REF have been given by Professor Derek Sayer of Lancaster University, writing in the Guardian.

I am not against competition in research, nor do I believe that research performance should not be monitored. But the RAE/REF process is about ranking universities rather than promoting research. I have no reason to think that anyone who matters is listening, but it is time to think again about this process.


Assessing the value of research

November 6, 2009

I was at a function recently when I was accosted by one of the other persons attending. How, she wanted to know, could I justify all that ‘useless research’ that was going on in my university? She wasn’t against research – not at all, in fact: she wanted us to find the cure for cancer, the answer to Dublin’s traffic problems, and a solution to all those under-funded pension schemes. And instead, what were we working on? Well, she had heard someone say that research was being funded by the taxpayer to analyse the ‘syntax of Wordsworth’s poetry’! I mean, can you imagine?

My first response when she paused to draw breath was that DCU was working on the three topics she mentioned (well actually, I don’t think we’re working on Dublin’s traffic, but I wasn’t going to admit that). But, I pointed out, it was important for society that there would be some researchers who were not working to a particular practical agenda, because they might well discover things that nobody had yet anticipated but which would change our lives. OK, she conceded, but Wordsworth’s syntax? I had no idea who if anyone really was working on this, but I pointed out that such research might produce valuable insights into the effectiveness of communication (well, I had to think of something quickly…).

But even if I found this conversation a little annoying, she was raising an issue with which we do need to come to grips: what is university research for? Why do we do it, and why should it be funded? And how many strings should be attached to the funding? And how do we measure whether it has all been worthwhile? A good friend of mine, a very respected academic who is one of the global leaders in his discipline, argues from time to time that the only worthwhile research is useless research; once we are subjecting it to an impact assessment, he suggests, we are cheapening it.

All of this is at the heart of the new system to be introduced in Britain for evaluating research, the Research Excellence Framework (REF). This will be used (as a successor to the Research Assessment Exercise) to evaluate a university’s research performance and determine how much general research funding it should receive. One of the key criteria to be used will be ‘impact’. This is explained as follows: ‘significant … recognition will be given where researchers build on excellent research to deliver demonstrable benefits to the economy, society, public policy, culture and quality of life.’ In other words, this will assess whether the research can satisfy my friend at the function. And this has drawn some strong criticism from the academic community, as has been reported in the most recent issue of Times Higher Education. Academics have not been persuaded that the ‘impact’ of their research is always a relevant or fair criterion, not least because it may not be known when a research project is first planned.

I have some sympathy with this resistance. And yet, as society (and other funders) are being asked to provide the resources for research, it is not unreasonable that they should ask what it is for. So maybe we should resist a little less, and just get better at explaining the purpose of research, even research that is at first sight functionally ‘useless’. We are probably no longer in an era where we can answer ‘mind your own business’ to such questions and still hope to get resources, but equally we should be able to explain convincingly that, sometimes, research is justified because it will engage an intellectual agenda and because the pursuit of such an agenda is right for a civilised society, and for a society that wants to train the best minds to do the best they can. And sometimes it is justified because it cures cancer and makes the traffic flow.

UK Research Assessment – some additional observations

December 18, 2008

From the perspective of the UK universities, the outcome of the Research Assessment Exercise is significant in two different ways. First, the RAE has perhaps the greatest impact on reputation of all the metrics used to compile league tables; there is evidence that even student choices are heavily influenced by it. Secondly, the results help to determine state funding (though the precise impact of these latest results on money is not yet known). In past RAEs there has been some tension between these two factors, as institutions struggled to work out how best to maximise both money and reputation when deciding how many staff to enter: if you entered more staff and scored well, the financial benefits were highest – but if the gamble failed and you scored badly, the damage to reputation could be immense. It was all essentially a gamble.

In previous RAE outcomes, the exercise tended also to reinforce the – formally abolished but still alive and kicking – binary divide between ‘old’ universities and ‘new’ ones (the former polytechnics). This appears to have been undermined by the 2008 RAE results. The lowest placed ‘old’ university appears to be the University of Wales at Lampeter, at No 83. The highest placed ‘new’ university is the University of Hertfordshire, at No 58. Between these two there is a mix of old and new. So while all universities above 58 are old universities, and all below 83 are new ones or various specialist institutions, the boundary has become more fluid. And in some subject areas this fluidity is more pronounced, with new universities reaching the top ranks. In Ireland of course it is different anyway, with the two ‘new’ universities (in chronological terms), DCU and the University of Limerick, either performing to the same standard as the older ones in research, or sometimes out-performing them.

So what do we make of all this? Are such exercises useful? I have no direct experience of the preparations for the UK’s 2008 RAE, but I was heavily involved in managing subject units for the University of Hull (which on the whole has not done well in 2008) for the previous two RAEs. On the positive side, I found that the RAE had driven home in the academic community the importance of research performance as a way of building up an international reputation – so vital for any national university sector that wants to compete in quality internationally. On the negative side, the focus of the earlier RAE cycles was to combat research non-performance amongst academics, and the result of this tended to be heavy-handed methods to compel performance on the part of staff who, even when they did produce output, were never going to set the world on fire. A lot of very mediocre published output resulted, and arguably truly excellent staff were neglected.

I believe it is right to assess research, and I suspect that we shall be moving in that direction in Ireland. But I also think we should learn from the UK’s RAE and ensure that we avoid the mistakes, and the extraordinary bureaucracy, that accompanied the earlier cycles. If we can do that, we may be able to develop our own system in a way that genuinely adds transparency.

Research assessment

December 18, 2008

Today is December 18th, and the UK Research Assessment Exercise results are published. The RAE website is here, and the full results can be read or downloaded here. I have had a rather cursory glance at these, and there appear to be some surprising results, with some older universities doing less well than expected. The Northern Ireland universities, as far as I can see, have at least in some subjects fared well. I shall try to present a more informed view of the results when I have studied them more closely.

The results are being presented somewhat differently from before, and it is now possible to see in much more detail how units have performed within their subject areas, and in particular what the distribution of staff performance is within specific subject areas.

The debate will now begin again, I imagine, as to how useful this whole exercise is, and the extent to which it does actually promote quality. Here in DCU, we have over the past year or so conducted our own research assessment, and we expect to give some results from that before long. I expect that, at some point, there will be a sector-wide exercise of this nature in Ireland; and when that happens, assuming it does, I hope we avoid some of the mistakes which, in my opinion, were made in the UK.