Posted tagged ‘examinations’

My colleague the computer

April 30, 2012

It’s that time of year when academics all over the place get ready for another avalanche of marking and assessment. In my own case, while I really do miss teaching very much and am looking at ways of returning to it, I don’t miss marking. Not even slightly. And I feel for those who will, over the next couple of months, be inundated with it.

But is there another way? In fact, could we just give the job to computers? And might we find that they can grade essays, assignments and examinations just as effectively as we can? Well, perhaps, according to a study conducted by researchers at the University of Akron. They compared the grades that human examiners gave to 22,000 short essays written in American schools with those recorded by computers running ‘automated essay scoring software’. The differences were, according to the researchers, ‘minute’.

I don’t know what kind of software this is, or how it works, or what its stated limitations might be, but this is a pretty amazing result. We know that computers can easily grade multiple choice examinations, but essays? And can we really imagine that an assignment intended to produce reasoned analysis could be assessed by machine? More generally, how much work has been done in considering the role that computers can play in designing, conducting and assessing teaching?

In fact, this is a subject of some interest in the education world. In July of this year there will be a conference in Southampton in England on computer-assisted assessment, and indeed there is a journal on the subject.

There are probably various contexts in which higher education assessment can be conducted by or with the help of software. But equally there are others where, at least from my perspective, it is unlikely that computers will be able to make robust qualitative judgements that could replicate human marking. Somehow I doubt that, in a few years, lecturers will no longer have to be examiners.


Assessing continuous assessment

August 29, 2011

In many ways, notwithstanding technological advances and social and demographic changes, education is still much the same in 2011 as it was a hundred years ago. Today’s student’s experience, from first entry into school to the final year at university, is not fundamentally different from that of previous generations. However, in higher education there has been one major shift: when I was a law student my degree result was based totally and exclusively on my performance in a number of written end-of-year examinations. Furthermore, these were all closed book exams. How I was marked depended on what I was able to remember from my courses and my analytical ability. Well, if I’m honest, analytical ability wasn’t that significant in the mix of things, and I know for a fact (because the examiner told me) that my inclination to add some critical assessment to my answers was held against me in at least one paper. ‘Better people than you,’ the examiner told me frankly, ‘have passed the laws and written the judgements. Your views on them are not material.’ Indeed.

But that’s not the case any longer, and for the past couple of decades there has been a growth of continuous assessment as part of the examining framework. Nowadays between 20 and 100 per cent of a student’s final result in a module may be based on their performance in projects, essays and exercises carried out as part of a continuous assessment programme throughout the year.

Furthermore, in a number of countries this practice has spread to schools. Increasingly the central or exclusive role of examinations has given way to project work that counts towards the final results. Plans by the Irish Minister for Education and Skills, Ruairi Quinn TD, to reform secondary education in this way have however run into opposition, particularly from the trade unions. The latter have argued that this is not the time to undertake such reforms (given current budget cuts), or that the reform is misguided anyway. Others have suggested that developing continuous assessment in schools prompts the earlier onset of plagiarism, particularly as sources are freely available online.

At one level it seems to me that it is not the role of the teacher unions to have a veto on education policy reform, though of course they are entitled to defend their members’ material interests. But more generally, examination-only assessment in the education system undermines society’s need for educated citizens with critical and analytical abilities and a capacity for lateral thinking. It is time for a proper combination of memory testing (which is also still relevant) and the encouragement of a more intellectual engagement with the subject matter of the curriculum. It is time for these reforms.

Drug testing in the examination hall?

July 26, 2011

It’s early in the morning, and you are about to sit an examination. You didn’t sleep well last night. You are tired, and you honestly cannot remember much about your subject. You are nervous. To steady your nerves you drink a strong cup of coffee. Stop! Are you taking a drug there that may enhance your powers? You’ll be disqualified.

Does that sound fanciful? Perhaps, but what if you took a neuro-enhancing smart drug, perhaps the cerebral equivalent of the sportsman’s steroids? Now are you cheating? And what about a can of Red Bull?

A senior lecturer in Pharmacology in Trinity College Dublin, Dr Andrew Harkin, has, according to a report in the Irish Independent, suggested that this amounts to cheating. Clearly one would not want to encourage the use of drugs for any reason, including this one, and where such use is illegal it should have the consequences set out in the law. But is it ‘cheating’? Are examinations a competitive sport? Neither caffeine nor drugs upload information to your brain. You will only answer what you know.

I believe that the approach suggested by Dr Harkin is misguided. Rather, we need to look again at the culture of cut-throat competition and the unnerving of students by the expectations of families, their peers and the wider society. Learning is important, but it should never be intimidating. We need to look again at the whole culture of higher education, and thereby make it less attractive for students to consider drugs. And I suspect they might sleep easier at night before the exams.

Laptop exams

May 23, 2011

About a year ago I asked a group of students what they most disliked about their studies. One gripe they all had in common was examinations – not that they were obliged to sit them, but rather that the answers had to be written out by hand. One of them told me that exams were the only thing for which he now used handwriting. Everything else, even lecture notes, he now did on a laptop or smartphone (or maybe now a tablet computer). He no longer possessed even one notebook; he could not see why he would ever need one. He did possess a ballpoint pen, but he illustrated its usefulness to him by telling me that, since he had been given it three years previously, he had never needed to buy a new refill. So why, he asked, should he be forced to use it for exams? If we were going to be that retro-minded, why not go the whole hog and insist on students using quill pens with ink?

Well, if he were a student at Edinburgh University he might be about to get some relief. According to a report in the newspaper Scotland on Sunday, the university is considering allowing students to use laptops in exams, though with software installed that would prevent them from going online or using other programs during the exam. According to the report, ‘senior officials at Edinburgh University believe it is unfair to expect undergraduates to resort to pens and paper during critical assessments when most of their coursework is done using a keyboard.’ Not only is the physical process of writing by hand now stressful for them, but they are also used to planning their work and executing it on a computer, so that doing it by hand unsettles them. In fact, Edinburgh are not altogether pioneers – secondary school students in Norway have been using laptops in examinations since 2009, as have students in some American universities.

It is probably true to say that we have barely scratched the surface in considering what modern technology may do to change teaching and pedagogy, and how it is affecting the students’ perceptions of study methods and assessment. Right now it looks as if universities will, without any particular coordination, slip into new ways of doing things, though quite probably not consistently. It is time to pay a little more attention to these matters.

University admissions: time to re-think the criteria

February 7, 2011

This is going to sound very grand, but over the past few years I have been trying to persuade the education sector and politicians of two things: the socially undesirable and financially unsustainable nature of ‘free’ higher education that is not adequately funded; and the damage being inflicted on Ireland by the Leaving Certificate examination (the Irish final school exam).

I want to focus briefly on one aspect of the second of these, the Leaving Certificate. I believe that, having once been quite innovative, it is now a thoroughly flawed exam with a wholly unsatisfactory curriculum attached to it. But it is also more or less the sole basis on which school leavers are admitted to university, so that the whole sad heap of its inadequacies infects higher education. There is an urgent need to reform the Leaving Certificate, but one way of advancing that agenda is to decouple it completely from university admissions.

There are many reasons for doing this, but the three most important are the following. First, the Leaving Certificate tests all the wrong skills and therefore doesn’t prepare students for higher education. Secondly, students make inappropriate subject choices in secondary school based on what examinations in those subjects will do to help them into university; let us stop that. Thirdly, the Leaving Certificate fuels the points system, with its mad impact on career choices.

In other countries, notably Britain, there is some evidence that using final school exams for university entrance purposes reinforces inequalities and penalises students from disadvantaged backgrounds and minorities.

It would be more appropriate and also fairer to set minimum entry requirements for all subjects, and then apply a lottery system to all those where demand outstrips supply. That would allow us, at last, to stop the deeply flawed pedagogy of the school system from undermining our society and our economy. It is time to take this problem seriously.
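To make the proposal concrete, the mechanism is simple enough to sketch in a few lines of code. The names, points thresholds and numbers below are entirely hypothetical illustrations, not part of any actual admissions scheme: applicants who meet the minimum entry requirement form a pool, and where eligible demand exceeds the places available, the places are drawn at random.

```python
import random

def admit(applicants, minimum_points, places, seed=None):
    """Minimum-threshold-plus-lottery admissions (illustrative sketch only).

    applicants: dict mapping applicant name -> points score (hypothetical).
    Anyone below the minimum is excluded outright; if more applicants
    qualify than there are places, winners are drawn at random.
    """
    eligible = [name for name, points in applicants.items()
                if points >= minimum_points]
    if len(eligible) <= places:
        # Demand does not outstrip supply: everyone eligible is admitted.
        return sorted(eligible)
    rng = random.Random(seed)  # seeded only so the draw is reproducible
    return sorted(rng.sample(eligible, places))

# Hypothetical example: five applicants, three places, minimum of 400 points.
pool = {'A': 480, 'B': 510, 'C': 390, 'D': 455, 'E': 520}
print(admit(pool, minimum_points=400, places=3, seed=42))
```

Under a scheme like this, the points threshold certifies readiness for the subject, while the lottery removes the incentive to chase marginal extra points.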

Assessments and examinations at risk

January 31, 2011

As governments in a number of countries try to square the circle of rising higher education participation rates and budget (and therefore faculty) cuts, one thing in particular should be borne in mind: the risk to the quality of exam and assignment correction. Only academics can really know the burden that descends on them at certain times of the year, when large numbers of papers have to be corrected and scored in a very short space of time, and detailed feedback provided for students. And while it is possible (though undesirable) to cram more students into a hall to hear a lecture, when these students produce examination papers, essays and projects the volume of this material may overwhelm the declining number of academics who have to carry out the corrections.

Initially, the risk is not that the job will not be done, but rather that it will be done too hastily. In the longer run the quality of the higher education experience is at risk.

When I was still teaching actively I always enjoyed and was greatly stimulated by the teaching. But even then I found exam correction a source of great pressure, both because of the numbers involved and because I was very aware of the responsibility that rested on me when I was doing this. As governments continue to push for greater participation in higher education while cutting the resources, they are creating a quality risk that will, in the end, have serious consequences.

Have examinations failed?

July 20, 2010

Earlier this year I wrote a post for this blog in which I wondered whether continuous assessment as the principal form of evaluating student performance could be sustained, given budgetary constraints and the problems of plagiarism. But even as I was thinking such thoughts, elsewhere the opposite trend was being mooted: in Harvard University (according to Harvard Magazine) the Faculty of Arts and Sciences has adopted a motion that provides that unless the lecturer declares otherwise well in advance, courses will no longer have end-of-term exams. The current position in Harvard is that only 258 out of 1,137 courses still have any final exams, and it is likely that this number will now drop much further.

So what are we to conclude? Probably that the whole framework of assessing academic programmes needs to be re-considered. On the one hand, current pedagogical thinking suggests that continuous assessment may be the most appropriate way of evaluating students; on the other hand, continuous assessment is so labour intensive that in the current funding environment it may no longer be affordable. The problem right now is that the strategic reviews of higher education are focusing on organisational structure, but are largely neglecting vital pedagogical issues such as this.

We are no longer sure what exactly it is that we need to assess, and how we should assess it. Answering that question is much more important than wondering about whether our universities and colleges should merge. But nobody is really addressing it.

Could (or should) we separate teaching and examining?

June 12, 2010

The new British Universities Minister, David Willetts, has suggested in a speech at Oxford Brookes University that there might be advantages in allowing new institutions to enter the higher education market to offer programmes that would then be examined by other (older) universities with ‘an established exam brand with global recognition’.

The major idea behind this suggestion appears to be the desire to admit new institutions into higher education. These would be able to establish themselves more quickly by linking to the examinations (and, presumably, the syllabus) of highly reputable existing universities. This would be an extension of the franchising of degree programmes that has been a feature of British higher education for the past decade or more. The new model would separate teaching and examining in a way that is similar to the secondary school system.

I confess I don’t find it easy to see the point of such a proposal. Restructuring the higher education sector in such a way that new colleges do service teaching for older universities (who then examine the outputs) does not seem to me to solve any of the various problems facing the sector right now. In any case, a proliferation of higher education providers with a teaching-only agenda may create its own quality assurance issues.

I suspect we all need to think again about how we can re-imagine university teaching to allow it to cope with the new resourcing environment. But I have serious doubts whether this proposal is the answer.

Adapting to changing times: farewell, continuous assessment?

February 16, 2010

In 1978 I sat the final examinations for my undergraduate degree at a certain Dublin college. I remember the exams well; they took place in late September, as was the custom there at the time (and these were not repeats). I sat my final examination (in European law, if memory serves) on a Friday afternoon, and on the Monday following I was due to register as a PhD student in Cambridge. It was really quite a crazy system, and not long afterwards the same Dublin college moved its exams from September to June.

But actually, I digress. Back in September 1978, as I answered my final question – on the economic impact of the European Economic Community’s competition policy – I had done everything I needed to do to qualify for my BA honours degree, and all of it was through examination. Over the four years of study I had submitted goodness knows how many essays and other assignments, but none of these counted for my final results.

Two years later I was myself a lecturer, and it took several years in that role before I set the first assignment for students that would count towards their degree results. If I remember rightly it was in 1986. But in the years since then, most universities have radically changed their assessment methods, and continuous assessment (in the form of essays, projects or laboratory work) has become the norm in most programmes, accounting for a significant proportion of the final results. In some institutions (including at least one in Ireland) it is now common for all of the marks for particular modules to come from continuous assessment. All of this has grown out of a consensus amongst educationalists, or at least many of them, that such methods of monitoring learning are better, encourage more sophisticated analysis, require independent learning, promote motivation and so forth.

Having read some really wonderful essays and projects submitted by students under such programmes when I was still lecturing, I can see the point of such arguments. And yet, at least part of me has always been sceptical, and right now my scepticism is winning out.

There are two main reasons for my doubts. First, I fear that many lecturers are being overwhelmed by the assaults of plagiarism. It’s not that everyone plagiarises, but a significant minority of students do, and this requires a degree of vigilance and perceptiveness by lecturers that may place impossible demands on them. But secondly and more importantly, I believe we are about to realise that we simply don’t have the resources to run continuous assessment properly. Assignments that count for degree results are coming in all the time, and when they do the lecturer has to correct them with a high degree of conscientiousness, and when that task is done and results have been verified, has to provide feedback to the student that will serve as appropriate guidance. These are incredibly labour-intensive tasks. And they often come on top of the more traditional examining duties, now usually at two points of the year.

I don’t believe this is sustainable. As funding is reduced radically, we have to ask ourselves whether we really can go on managing a system that is not being resourced. I fear that, often, continuous assessment that is being conducted by an over-worked lecturer can be quite damaging, particularly if the main point (the feedback) is lost as the lecturer simply does not have the time to offer it. In the end we may have to accept that the time for such methods has passed and that we may need to give more prominence again to examinations, which have the additional benefit that they make plagiarism much more difficult.

Continuous assessment has been a worthwhile educational experiment. But I fear it is no longer sustainable.

Please don’t let me be misunderstood

February 2, 2010

When I was still actively teaching, I would occasionally experience one of those moments when I couldn’t quite say whether I felt annoyed or amused: when I would receive an essay, or exam answer, or some other student contribution that would make it clear that the student had not understood a word I was saying. One student, for example, gave me an essay which included some discussion of what he termed ‘the industrial relations practice of piggoting’. I was intrigued by this (after all, anyone can get a spelling wrong), but the word was used in this form repeatedly, and the narrative suggested to me that he didn’t know what he was actually talking about.

So I called him in. ‘Tom’ (name changed), I said, ‘you write a fair bit about “piggoting” – what do you think that is?’ He started to answer in very vague terms, but perhaps seeing my quizzical look he stopped himself and said much more decisively: ‘It’s very quick action in an industrial dispute.’ Okayyyee – and where do you think the word comes from? Now somewhat uneasy, he hummed and hawed, and then suggested it was derived from the name of the jockey Lester Piggott. Right! And then I realised that my whole course was probably a total mystery to this poor guy. To this day I wonder whether he has ever been asked to stop at a ‘piggot line’.

But it is not just in universities. I have a copy somewhere of a debate in the late 1970s in the European Parliament. The official record shows that a member of the Parliament asked a question about the European market for ‘farmers’ uticles’. You might spend some time wondering about what ‘uticles’ could be or how farmers use them. But if you keep reading the debate it will eventually dawn on you that the question was actually about ‘pharmaceuticals’, and the stenographer clearly had no idea what this was about and just wrote down something that sounded right.

But my own favourite was the examination answer written for me by a student in the early 1980s, who stated with great confidence that decrees of annulment for marriages were most commonly based on ‘impudence’. Quite right.