Posts tagged ‘continuous assessment’

Assessing continuous assessment

August 29, 2011

In many ways, notwithstanding technological advances and social and demographic changes, education is still much the same in 2011 as it was a hundred years ago. The experience of today’s students, from first entry into school to the final year at university, is not fundamentally different from that of previous generations. However, in higher education there has been one major shift: when I was a law student my degree result was based entirely on my performance in a number of written end-of-year examinations. Furthermore, these were all closed-book exams. My marks depended on what I was able to remember from my courses and on my analytical ability. Well, if I’m honest, analytical ability wasn’t that significant in the mix, and I know for a fact (because the examiner told me) that my inclination to add some critical assessment to my answers was held against me in at least one paper. ‘Better people than you,’ the examiner told me frankly, ‘have passed the laws and written the judgements. Your views on them are not material.’ Indeed.

But that’s not the case any longer, and for the past couple of decades there has been a growth of continuous assessment as part of the examining framework. Nowadays between 20 and 100 per cent of a student’s final result in a module may be based on their performance in projects, essays and exercises carried out as part of a continuous assessment programme throughout the year.

Furthermore, in a number of countries this practice has spread to schools. Increasingly, the central or exclusive role of examinations has given way to project work that counts towards the final results. Plans by the Irish Minister for Education and Skills, Ruairi Quinn TD, to reform secondary education in this way have, however, run into opposition, particularly from the trade unions. The unions have argued that this is not the time to undertake such reforms (given current budget cuts), or that the reform is misguided anyway. Others have suggested that introducing continuous assessment in schools prompts the earlier onset of plagiarism, particularly as sources are freely available online.

At one level it seems to me that it is not the role of the teachers’ unions to have a veto on education policy reform, though of course they are entitled to defend their members’ material interests. But more generally, examination-only assessment undermines society’s need for educated citizens with critical and analytical abilities and a capacity for lateral thinking. It is time for a proper combination of memory testing (which is still relevant) and the encouragement of a deeper intellectual engagement with the subject matter of the curriculum. It is time for these reforms.

Have examinations failed?

July 20, 2010

Earlier this year I wrote a post for this blog in which I wondered whether continuous assessment as the principal form of evaluating student performance could be sustained, given budgetary constraints and the problems of plagiarism. But even as I was thinking such thoughts, elsewhere the opposite trend was being mooted: at Harvard University (according to Harvard Magazine) the Faculty of Arts and Sciences has adopted a motion providing that, unless the lecturer declares otherwise well in advance, courses will no longer have end-of-term exams. The current position at Harvard is that only 258 out of 1,137 courses still have any final exams, and it is likely that this number will now drop much further.

So what are we to conclude? Probably that the whole framework of assessing academic programmes needs to be reconsidered. On the one hand, current pedagogical thinking suggests that continuous assessment may be the most appropriate way of evaluating students; on the other hand, continuous assessment is so labour-intensive that in the current funding environment it may no longer be affordable. The problem right now is that the strategic reviews of higher education are focusing on organisational structure, but are largely neglecting vital pedagogical issues such as this.

We are no longer sure what exactly it is that we need to assess, and how we should assess it. Answering that question is much more important than wondering about whether our universities and colleges should merge. But nobody is really addressing it.

Adapting to changing times: farewell, continuous assessment?

February 16, 2010

In 1978 I sat the final examinations for my undergraduate degree at a certain Dublin college. I remember the exams well; they took place in late September, as was the custom there at the time (and these were not repeats). I sat my final examination (in European law, if memory serves) on a Friday afternoon, and on the Monday following I was due to register as a PhD student in Cambridge. It was really quite a crazy system, and not long afterwards the same Dublin college moved its exams from September to June.

But actually, I digress. Back in September 1978, as I answered my final question – on the economic impact of the European Economic Community’s competition policy – I had done everything I needed to do to qualify for my BA honours degree, and all of it was through examination. Over the four years of study I had submitted goodness knows how many essays and other assignments, but none of these counted for my final results.

Two years later I was myself a lecturer, and it took several years in that role before I set the first assignment for students that would count towards their degree results. If I remember rightly, it was in 1986. But in the years since then, most universities have radically changed their assessment methods, and continuous assessment (in the form of essays, projects or laboratory work) has become the norm in most programmes, accounting for a significant proportion of the final results. In some institutions (including at least one in Ireland) it is now common for all of the marks for particular modules to come from continuous assessment. All of this has grown out of a consensus amongst educationalists, or at least many of them, that such methods of monitoring learning are better: they encourage more sophisticated analysis, require independent learning, promote motivation, and so forth.

Having read some really wonderful essays and projects submitted by students under such programmes when I was still lecturing, I can see the point of such arguments. And yet, at least part of me has always been sceptical, and right now my scepticism is winning out.

There are two main reasons for my doubts. First, I fear that many lecturers are being overwhelmed by the assault of plagiarism. It’s not that everyone plagiarises, but a significant minority of students do, and this requires a degree of vigilance and perceptiveness from lecturers that may place impossible demands on them. Secondly, and more importantly, I believe we are about to realise that we simply don’t have the resources to run continuous assessment properly. Assignments that count towards degree results are coming in all the time. When they do, the lecturer has to correct them with a high degree of conscientiousness and, once that task is done and the results have been verified, has to provide feedback that will serve the student as appropriate guidance. These are incredibly labour-intensive tasks, and they often come on top of the more traditional examining duties, now usually at two points in the year.

I don’t believe this is sustainable. As funding is reduced radically, we have to ask ourselves whether we can really go on operating a system that is not being resourced. I fear that continuous assessment conducted by an overworked lecturer can often be quite damaging, particularly if the main point (the feedback) is lost because the lecturer simply does not have the time to offer it. In the end we may have to accept that the time for such methods has passed and that we may need to give more prominence again to examinations, which have the additional benefit of making plagiarism much more difficult.

Continuous assessment has been a worthwhile educational experiment. But I fear it is no longer sustainable.

The highs and lows of examinations

September 3, 2008

My own English-language education was remarkably consistent for its entire duration. Both at school and at university, I received instruction through face-to-face contact with teachers, and at the end of each course I was tested in a written examination on my retained knowledge, with the exam typically determining how I was deemed to have performed in the subject. At school the exam result was balanced by what we would now call continuous assessment, but mostly the overall result depended on the examination. At university there was, at the time, no continuous assessment at all: the exam result was absolutely everything.

My German-language experience was somewhat different. At my secondary school in Germany, I was tested at various intervals during the year through so-called ‘Arbeiten’, written assignments, some (but not all) performed under exam conditions. These were staggered through the year, so that I gradually built up my performance profile. The final grade on leaving school was determined by the Abitur examination, which had both a written and an oral element.

Fast forward to 2000, the last year during which I undertook regular teaching duties at my then university, the University of Hull. My main module – the one I taught by myself – did not have an examination element at all, but was assessed entirely through project work carried out during the year. Other modules to which I contributed had mixed elements of examination and continuous assessment.

In fact, it has for a while been a topic of pedagogical debate whether exams are a good or a bad way of testing ability and achievement. Opinions are divided. Some believe that, because exams are conducted in conditions where plagiarism and cheating can be controlled, they are a more accurate reflection of a student’s performance; others believe that they encourage memory exercises only and discourage intellectual ingenuity and independent thinking. Others still simply don’t know and hedge their bets (and support a mixed mode).

It is perhaps time that this issue was handled more systematically. It is of course likely that not all learning can be tested in the same way. First-year students need to be assessed differently from final-year students, and those doing a PhD need to be tested in a wholly different way again. But we do need a clear understanding, on pedagogical grounds, of what is right in each case, and there should be more consistency, even in a system that allows for variety. And we need to come to an understanding of the potential and risks involved in online testing, whether with multiple-choice or other formats.

There has been some research on this – see this book, for example – but in practice there is little sign that an integrated approach based on evidence and analysis is being applied. It is probably time for that now.