
Adapting to changing times: farewell, continuous assessment?

February 16, 2010

In 1978 I sat the final examinations for my undergraduate degree at a certain Dublin college. I remember the exams well; they took place in late September, as was the custom there at the time (and these were not repeats). I sat my final examination (in European law, if memory serves) on a Friday afternoon, and on the Monday following I was due to register as a PhD student in Cambridge. It was really quite a crazy system, and not long afterwards the same Dublin college moved its exams from September to June.

But actually, I digress. Back in September 1978, as I answered my final question – on the economic impact of the European Economic Community’s competition policy – I had done everything I needed to do to qualify for my BA honours degree, and all of it was through examination. Over the four years of study I had submitted goodness knows how many essays and other assignments, but none of these counted for my final results.

Two years later I was myself a lecturer, and it took several years in that role before I set the first assignment for students that would count towards their degree results. If I remember rightly, it was in 1986. But in the years since then, most universities have radically changed their assessment methods, and continuous assessment (in the form of essays, projects or laboratory work) has become the norm in most programmes, accounting for a significant proportion of the final results. In some institutions (including at least one in Ireland) it is now common for all of the marks for particular modules to come from continuous assessment. All of this has grown out of a consensus amongst educationalists, or at least many of them, that such methods of monitoring learning are better: they encourage more sophisticated analysis, require independent learning, promote motivation and so forth.

Having read some really wonderful essays and projects submitted by students under such programmes when I was still lecturing, I can see the point of such arguments. And yet, at least part of me has always been sceptical, and right now my scepticism is winning out.

There are two main reasons for my doubts. First, I fear that many lecturers are being overwhelmed by the assault of plagiarism. It's not that everyone plagiarises, but a significant minority of students do, and this requires a degree of vigilance and perceptiveness from lecturers that may place impossible demands on them. But secondly, and more importantly, I believe we are about to realise that we simply don't have the resources to run continuous assessment properly. Assignments that count for degree results are coming in all the time, and when they do the lecturer has to correct them with a high degree of conscientiousness and, when that task is done and the results have been verified, has to provide feedback to the student that will serve as appropriate guidance. These are incredibly labour-intensive tasks. And they often come on top of the more traditional examining duties, now usually at two points in the year.

I don’t believe this is sustainable. As funding is radically reduced, we have to ask ourselves whether we really can go on managing a system that is not being resourced. I fear that continuous assessment conducted by an overworked lecturer can often be quite damaging, particularly if the main point (the feedback) is lost because the lecturer simply does not have the time to offer it. In the end we may have to accept that the time for such methods has passed and that we may need to give more prominence again to examinations, which have the additional benefit of making plagiarism much more difficult.

Continuous assessment has been a worthwhile educational experiment. But I fear it is no longer sustainable.


The highs and lows of examinations

September 3, 2008

My own English-language educational experience was remarkably consistent for its entire duration. Both at school and at university, I received instruction through face-to-face contact with teachers, and at the end of the course I was tested in a written examination on my retained knowledge, with the exam typically determining how I was deemed to have performed in the subject. At school the exam result was balanced by what we would now call continuous assessment, but mostly the overall result depended on the examination. At university there was, at the time, no continuous assessment at all: the exam result was absolutely everything.

My German language experience was somewhat different. At my secondary school in Germany, I was tested at various intervals during the year through so-called ‘Arbeiten’, which were written assignments, some (but not all) performed under exam conditions. These were staggered through the year, so that I gradually built up my performance profile. The final grade on leaving school was determined by the exam called the Abitur, which had a written and an oral element.

Fast forward to 2000, the last year during which I undertook regular teaching duties at my then university, the University of Hull. My main module – the one I taught by myself – did not have an examination element at all, but was assessed entirely through project work carried out during the year. Other modules to which I contributed had mixed elements of examination and continuous assessment.

In fact, it has for some time been a topic of pedagogical debate whether exams are a good way of testing ability and achievement, or a bad one. Opinions are divided. Some believe that, because exams are conducted under conditions in which plagiarism and cheating can be controlled, they are a more accurate reflection of a student’s performance; others believe that they encourage mere memory exercises and discourage intellectual ingenuity or independent thinking. Others simply don’t know and hedge their bets (and support a mixed mode).

It is perhaps time that this issue was handled more systematically. It is of course likely that not all learning can be tested in the same way. First-year students need to be assessed differently from final-year students, and those doing a PhD need to be tested in a wholly different way again. But we do need a clear understanding, on pedagogical grounds, of what is right in each case, and there should be more consistency, even within a system that allows for variety. And we need to come to an understanding of the potential and risks involved in online testing, whether in multiple-choice or other formats.

There has been some research on this – see this book, for example – but in practice there is little sign that an integrated approach based on evidence and analysis is being applied. It is probably time for that now.