My own English-language education experience was remarkably consistent for its entire duration. Both at school and at university, I received instruction through face-to-face contact with teachers, and at the end of the course I sat a written examination on my retained knowledge, with the exam typically determining my result in the subject. At school the exam result was balanced by what we would now call continuous assessment, but the overall result still depended mostly on the examination. At university there was, at the time, no continuous assessment at all: the exam result was absolutely everything.
My German language experience was somewhat different. At my secondary school in Germany, I was tested at various intervals during the year through so-called ‘Arbeiten’, which were written assignments, some (but not all) performed under exam conditions. These were staggered through the year, so that I gradually built up my performance profile. The final grade on leaving school was determined by the exam called the Abitur, which had a written and an oral element.
Fast forward to 2000, the last year during which I undertook regular teaching duties at my then university, the University of Hull. My main module – the one I taught by myself – did not have an examination element at all, but was assessed entirely through project work carried out during the year. Other modules to which I contributed had mixed elements of examination and continuous assessment.
In fact, whether exams are a good or a bad way of testing ability and achievement has for a while been a topic of pedagogical debate. Opinions are divided. Some believe that, because exams are conducted under conditions in which plagiarism and cheating can be controlled, they are a more accurate reflection of a student’s performance; others believe that they encourage mere memory exercises and discourage intellectual ingenuity and independent thinking. Others simply don’t know and hedge their bets (and support a mixed mode).
It is perhaps time that this issue was handled more systematically. It is of course likely that not all learning can be tested in the same way. First-year students need to be assessed differently from final-year students, and those doing a PhD need to be tested in a wholly different way. But we do need a clear understanding, on pedagogical grounds, of what is right in each case, and there should be more consistency, even in a system that allows for variety. And we need to come to an understanding of the potential and risks of online testing, whether in multiple-choice or other formats.
There has been some research on this – see this book, for example – but in practice there is little sign that an integrated approach based on evidence and analysis is being applied. It is probably time for that to change.