It’s that time of year when academics everywhere brace themselves for another avalanche of marking and assessment. In my own case, while I genuinely miss teaching and am looking at ways of returning to it, I don’t miss marking. Not even slightly. And I feel for those who will, over the next couple of months, be inundated with it.
But is there another way? Could we simply hand the job over to computers? And might we find that they can grade essays, assignments and examinations just as effectively as we can? Well, perhaps, according to a study conducted by researchers at the University of Akron. They compared the grades awarded by human examiners to 22,000 short essays written in American schools with those produced by computers running ‘automated essay scoring software’. The differences were, according to the researchers, ‘minute’.
I don’t know what kind of software this is, how it works, or what its stated limitations might be, but it is a pretty remarkable result. We know that computers can easily grade multiple-choice examinations, but essays? Can we really imagine that an assignment intended to produce reasoned analysis could be assessed by machine? More generally, how much work has been done on the role that computers can play in designing, conducting and assessing teaching?
In fact, this is a subject of some interest in the education world. In July of this year there will be a conference in Southampton in England on computer-assisted assessment, and indeed there is a journal on the subject.
There are probably various contexts in which higher education assessment can be conducted by, or with the help of, software. But equally there are others where, at least from my perspective, it is unlikely that computers will be able to make the robust qualitative judgements needed to replicate human marking. Somehow I doubt that lecturers will be relieved of their examining duties any time soon.