Guest blog: Has there been a decline in educational standards?

Dr Brendan Guilfoyle, Institute of Technology, Tralee

 

Almost two years ago a post on this blog led to the following question: what do we mean by educational standards? While matters have moved on somewhat since that initial debate on grade inflation, it is still worthwhile to attempt to answer this question.

In a recent paper, ‘New Metrics for Detecting Changes in Educational Standards’, I have considered it from a theoretical point of view and applied the conclusions to updated data from the Irish educational system at second and third level. This is paper 9 of a series published by the Network for Irish Educational Standards investigating this issue in a broader context.

I hope the methods and conclusions are of interest to the academic community and I would like to thank Professor von Prondzynski for affording me the opportunity of this guest post to summarize the results.

The theoretical framework adopted has three fundamental assumptions. These are that:

  1. we are dealing with a mass education system,
  2. the results of assessment provide sufficient grade differentiation,
  3. assessment measures student performance against some hierarchical taxonomy of activity.

The first assumption is to allow for robust statistical analysis, the second means we do not consider pass/fail systems, and the final assumption means that assessment seeks to measure student performance against some framework of activity defining the educational standard. It can be as abstract as Bloom’s famous taxonomy of cognitive activity or it can be more specifically articulated to a particular set of learning objectives.

For our purposes the details of the taxonomy are not important. The key feature is that it is hierarchical: the scale of activity has an ordering that is cumulative. Thus a lower activity must be mastered in order to advance to the next level of activity. Conversely, those who have mastered the higher levels find it easier to perform lower activities. Only a belief in a radical dissociativity of cognitive activity would lead one to reject this assumption for second or third level education, and such a Fordist belief does not appear to be widespread among contemporary educationalists.

Consider then an educational system satisfying our three assumptions: that is, an educational system which produces a grade for a large number of students which measures their performance against a hierarchical taxonomy. This grade could be arrived at in any number of ways and could be, for example, an aggregation of a number of measurements. The resulting grade distribution reflects the attainment of the student cohort against the standard.

From a formal point of view, assessment is then a mapping from an ordered set of abstract attributes to the grade distribution of the student population. Such a grade mapping is determined by numerous interlinked factors, including the nature of the material being assessed, the mode of assessment, the selection of students, as well as historical and institutional factors.

We define declining standards to be a change over time in a given educational system where the grade mapping gives higher grades to those at fixed levels in the taxonomy. Equivalently, decline means that a lower level in the taxonomy is required to attain a fixed grade. In such a situation, the mean of the grade distribution would naturally increase. Reversing the argument does not work directly, however, since grade increase on its own does not necessarily imply declining standards. It could result from a variety of factors, for example better teaching, higher selectivity of students etc.

Indeed, it is precisely this issue that is the main point of contention, if not controversy, in the debate about declining educational standards. When is grade increase a symptom of grade inflation (i.e. declining standards) and when is it a sign of higher student attainment? In the absence of other comparative metrics of performance, how can the former be distinguished from the latter?

In the paper I come to the conclusion that if increasing mean grade is accompanied by decreasing skewness, then we have strong evidence of declining standards. In particular, I consider second and third order effects in the grade distribution and what one expects to see during times of declining standards in mass education systems. It is argued that the hierarchical nature of the taxonomy implies that, during times of declining standards, in general those students operating at a higher level in the taxonomy benefit more. That is, one expects to see a non-linear effect in which the grade distribution, aside from having an increasing mean, becomes more negatively skewed. We should witness a depletion of lower grades and an increase in higher grades as grades migrate across the increasing mean.

Furthermore, an advanced decline in standards leads to a second order effect in the form of decreasing standard deviation. This is an artifact of the ceiling effect whereby the top grades cannot increase any further. Such a decrease in standard deviation undermines the whole ethos of assessment as a measure of achievement in educational settings.
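These three summary statistics can be computed directly from a grade distribution. As a minimal sketch in Python (with invented grade bands and counts, not the actual Irish data), compare two hypothetical cohorts:

```python
# Sketch: the summary statistics used to detect the "declining
# standards" fingerprint. The grade bands and counts are invented
# for illustration; they are not the actual Irish data.
from statistics import mean, stdev

def grade_moments(grades, counts):
    """Mean, standard deviation and skewness of a grade
    distribution given as parallel lists of band midpoints
    and student counts."""
    data = [g for g, n in zip(grades, counts) for _ in range(n)]
    m = mean(data)
    s = stdev(data)
    skew = sum((x - m) ** 3 for x in data) / (len(data) * s ** 3)
    return m, s, skew

grades = [35, 45, 55, 65, 75]          # band midpoints (%)
early_cohort = [20, 30, 30, 15, 5]     # hypothetical earlier year
late_cohort = [5, 15, 30, 30, 20]      # hypothetical later year

m1, s1, k1 = grade_moments(grades, early_cohort)
m2, s2, k2 = grade_moments(grades, late_cohort)

# Fingerprint of declining standards: mean up, skewness down.
print(m2 > m1, k2 < k1)  # prints: True True
```

In this toy comparison the later cohort's grades have migrated upward across the mean and the skewness has flipped from positive to negative, which is precisely the pattern the paper looks for.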

Let us, for a moment, consider a simple example that illustrates these concepts. If a third level lecturer gives a hint that a particular topic will appear on the end-of-term examination, this information will benefit a student only to the degree to which they are able to take advantage of it. That is, the best students will pick up on it immediately and make a note, the average student may know that a hint has been given but be unclear as to exactly what it refers to, while the weak student, if they are even present, will have little awareness of what has transpired. Thus the better students are more advantaged than their weaker colleagues and, aside from increasing the mean grade in the class, negative skewness will be introduced into the examination grades.

Should the errant lecturer go so far as to show the students the test beforehand, not only will the mean jump, but most of the students will be squeezed into the top grades and become well-nigh indistinguishable. Except, of course, for the poor student who didn’t turn up that day. In any event, the standard deviation will have decreased.
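These two scenarios can be mimicked in a toy simulation (all numbers invented, and the `leak` boost rule is a hypothetical model, not anything estimated from real data): a leak whose benefit grows with ability raises the mean and pushes the skewness negative, while a stronger leak also compresses the spread against the ceiling.

```python
# Toy simulation of the "hint" and "seen paper" scenarios.
# Baseline marks and the boost rule are invented for illustration.
import random
from statistics import mean, stdev

def skewness(xs):
    m, s = mean(xs), stdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

def leak(mark, strength):
    """Gain from leaked information: it grows with the student's
    ability (mark), but headroom shrinks near the ceiling of 100."""
    gain = strength * (mark / 100) * (100 - mark)
    return min(mark + gain, 100)

random.seed(1)
baseline = [min(max(random.gauss(55, 12), 0), 100) for _ in range(1000)]
hinted = [leak(x, 0.4) for x in baseline]   # a hint about one topic
seen = [leak(x, 0.9) for x in baseline]     # the paper shown in advance

for name, marks in [("baseline", baseline), ("hinted", hinted), ("seen", seen)]:
    print(name, round(mean(marks), 1), round(skewness(marks), 2),
          round(stdev(marks), 1))
```

In this sketch the hinted cohort shows the first-order fingerprint (higher mean, more negative skewness), while the stronger leak also shows the second-order one: a smaller standard deviation as the ceiling compresses the top of the distribution.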

We then turn to grades for the Leaving Certificate Examination from 1992 to 2009 and undergraduate university awards from 1998 to 2008. From the grade data at both levels one finds that the grade distributions display the characteristic pattern of declining standards: increasing grade mean and decreasing grade skewness.

This is a feature of almost all of the most popular subjects of the Leaving Certificate Examination over the period. Interestingly, mathematics has managed to maintain its standard relative to other subjects by these metrics. Perhaps this is a missing argument about reform at second level: the problem is not that mathematics is too hard, but rather that other subjects have become too easy!

The standard deviation of Leaving Certificate awards has remained more or less stable. Thus while it has certainly slipped down the taxonomy, the examination is still able to distinguish between students in a relative sense.

For Honours Bachelor degrees in all seven universities the mean has increased and the skewness has decreased, again the fingerprint of falling standards. In fact, the universities saw a dramatic period of inflation (1998-2005), after which the mean stabilized. Could this be the ceiling effect?

Perhaps more worryingly, university grades are found to have decreasing standard deviation – a hallmark of advanced decline in educational standards whereby assessment fails to distinguish between students. Thus, Irish universities appear to be slouching towards a pass/fail system with relative merit residing with institutional reputation rather than award level.

It is hoped that these findings will move the discussion of educational standards forward. We have presented a theoretical model of educational settings which has predictive qualities, as well as tools for a detailed analysis of grade trends against which to test predictions. These tests track changes over time for a given institution and therefore are not subject to the usual difficulties of comparing across institutions. The available data can be analysed from a number of other interesting perspectives within this framework.

For the Irish education system as a whole, at second and third level in particular, the message is clear: there has been a decline in educational standards over the past 15 years. Those who argue that this is not so – that increasing award levels are attributable to student motivation, improvements in teaching or whatever – must now present their case with both a coherent theoretical framework and empirical data to back up their claims. Otherwise, the debate must move on to how best to halt the decline.


17 Comments on “Guest blog: Has there been a decline in educational standards?”

  1. Peter Lydon Says:

    An interesting and important post worth wider circulation. Though I’m not completely compelled by the argument, I know from experience that standards are declining.

    I am unclear what you mean by “Thus while it has certainly slipped down the taxonomy, the examination is still able to distinguish between students in a relative sense.”

    The LC tests the full range of Bloom’s taxonomy and in this sense, declining standards seems to be explained by the ease of the material. This is certainly the case at JC level.

    Can one argue that higher order skills are ‘easier’ if the material is easier and that this produces grade inflation?

    I’m quite bleak about the debate. One would clearly like to see appropriate standards and ones based on grade differences. The proposed reform of the Junior Cycle suggests a pass/merit system. Instead of a system that would differentiate on ability, we are moving to entrench a system of ‘one-size-fits-all’ where all pupils need to be confident of a grade to reward effort at *whatever* level the student happens to be. Simply, a ‘D’ student will be seen to perform as well as an ‘A’ student… hence suggestions last week that students could get grades for starring in the school play!

    This of course will feed up the chain to LC and university level. But then, isn’t it government policy to increase the number of students at third level – a policy that to some extent ignores ability and standards?

    On the issue of Mathematics, much of the debate missed the most basic point; students do not spend time on Maths because it is relatively harder and they don’t *need* it. The simplest way to increase the number of students taking Higher Maths is to require maths and to either remove the 7th subject or require 7 subjects for University entrance.

    • Brendan Guilfoyle Says:

      By “Thus while it has certainly slipped down the taxonomy, the examination is still able to distinguish between students in a relative sense” I meant just that the grades have maintained their spread, as measured by the standard deviation.


  2. I’m not sure that I buy Brendan’s argument. Grades are rather arbitrary discriminating mechanisms, and so we shouldn’t get too essentialist about them, or worry too much about how the percentage of students obtaining a particular grade changes over the years. For instance, a first honours in the Irish grading system is 70%+ but this doesn’t imply that our top students are ‘C’ students (given that 70%+ is only a C in the US system). Likewise, I’m not sure that it’s useful to equate grade inflation with declining standards, since another way of looking at it is that the meaning of the grades changes over time (and maybe for good reason given the dominance of the US university system in the global context). And is it not more important, or at least as important, to examine grade distributions across space (i.e. international comparisons) rather than time (historical comparisons in one jurisdiction)?

    Second, a decreasing skewness in the grade distribution could be because the teaching of weaker students has improved, through for instance targeted initiatives, and not because of ‘declining standards’.

    Finally, I routinely use ‘seen’ examinations, and I’ve no evidence that the standard deviation or the mean is much different from ‘unseen’ examinations. Indeed my hunch is that the mean is lower for ‘seen’ examinations (though it’s difficult to run proper experiments to test that hypothesis).

    • Brendan Guilfoyle Says:

      To “examine grade distributions across space (i.e. international comparisons)” is much more difficult because of the many factors involved (selectivity of students, types and stratifications of awards, etc.). If someone wants to try to undertake such a study, it might be worthwhile.

      In my paper I discussed different possible grade changes and what they could represent. Targeted initiatives were one example – they tend to *increase* skewness (all definitions are in the paper).

  3. jfryar Says:

    Firstly, I think this is an extremely interesting and valuable contribution to the debate. However, I would raise one point.

    One of the differences between the secondary and tertiary system is the movement of students within the system. Failure rates within third-level degrees are typically around the 25% mark, with up to 40% being reported in some degrees. Is it not the case then that, in first year, examinations will discriminate between strong, average, and weak candidates, but second year evaluations will typically discriminate between strong and average candidates, since the weaker students will have dropped out? A change in failure or dropout rates, one might imagine, could affect the statistics.

    We have also seen the creation of subset programmes since 1998. These ‘daughter’ programmes allow students to choose from a wider range of subject material and, hence, students may choose a subset matching their particular aptitudes. A physics degree might have attracted 50 students but now, those students will be distributed between, say, physics, physics with biomedical sciences, and physics with astronomy (to take an example from DCU). One might expect that the choices made will reflect an aptitude or interest.

    We have also seen the creation of ‘common entry’ programmes and a greater number of optional modules. This, one might expect, would filter students by aptitude or interest in a way that perhaps did not exist before their creation.

    There has been a change in attitudes to third-level. DCU has operated a programme for many years whereby students entering one degree programme can migrate to another after completion of their first year. There has been a change in the number of mature students who, typically, perform better than their Leaving Cert peers, and a marked increase in the numbers of female students, who typically achieve well on the LC and are entering non-traditional programmes like engineering and physics in greater numbers.

    So, for me, issues such as those above are important and may have a direct impact on the results of such statistical analysis. Were such factors accounted for in the study?

    • Brendan Guilfoyle Says:

      The lack of available data on failure rates is unsatisfactory – in the paper I call it the dark matter of the Irish University system.

      Should this data ever become available it should be incorporated into the analysis and, if it were to significantly shift the data, I would certainly re-examine my conclusions.

      Indeed, it could well be that the Universities have fought a heroic battle to maintain standards by failing 1st years in larger and larger proportions. It might not make for good educational policy nor be cost effective nor, indeed, be fair to students, but at least it would demonstrate the institutions’ adherence to standards. Somehow, I doubt it though.

      Finally, internal factors specific to DCU may well have played a part in the grade drift there. But since the changes have occurred across all seven Universities (and IoTs), the data suggests more common elements.

      Isolating and identifying these common factors is tricky but possible. In earlier papers, my colleagues and I have attempted to do so.

  4. Eugene Gath Says:

    Having been involved with delivering mathematics at third level since 1990 (and for a brief while in 1981-82), I have seen this decline first hand. One tends to think (or maybe hope) that it has bottomed out, but year after year students surprise us with even lower standards. Twenty years ago, most first year maths students (including engineers, scientists etc.) could perform a few-line algebraic manipulation without error. This is becoming a rarity now. I would suggest there are two contributory factors to the decline in standards at third level: first and foremost the decline at second level, and second the “massification” of third level education that took place from the early 1990s until recently. While I fully support the maximum possible inclusion in third level education, I think the concomitant reduction in standards went far beyond what one could attribute to this factor alone.

    • Jilly Says:

      One possible reason for the point you raise in your last sentence is that the massification of higher education was not matched by an equivalent rise in resources, particularly teaching staff. So you now have the really, undoubtedly talented students being taught in large classes and with declining standards of facilities (everything from high-level technology to books in the library). That is likely to have reduced those students’ absolute levels of achievement.

    • jfryar Says:

      Eugene, I agree. But I can’t help but feel that third-level institutions must take part of the blame.

      Most science and engineering courses do not require higher level maths – the ‘minimum entry requirements’ are typically an HD3 or anywhere between an OB3 and an OC3, depending on the university. Now these higher and ordinary level grades are not equivalent in terms of points, so we have a two-tier system. I simply do not understand how a ‘minimum entry requirement’ can be represented by two totally different points values (45 in the case of an HD3, 20 in the case of an OC3). Why do ordinary level students require fewer points to meet the requirement?

      The answer is simple. About 96% of students taking the higher level course will meet the HD3 grade. About 67% of students taking the ordinary level course make an OC3 grade. By setting the minimum entry requirements at such levels, universities maximise their student intake.

      We have created a circular system. Universities do not tighten up minimum entry requirements for maths, so as to ensure people apply for their courses (this is partially due to the free fees scheme). In doing so, students realise they do not need to perform well in maths, can take ordinary level maths, and can still make CAO points totals with their other subjects. And so most students take ordinary level maths, which ensures universities do not tighten up their requirements for fear of decreasing their student intake.

  5. Al Says:

    Great post.
    Thanks for this.
    I have a question in relation to this:

    “The key feature is that it is hierarchical: the scale of activity has an ordering that is cumulative. Thus a lower activity must be mastered in order to advance to the next level of activity. Conversely, those who have mastered the higher levels find it easier to perform lower activities.”

    This makes sense, and the NFQ is an example of this hierarchical progression. My question relates to how the leaving cert interacts with this hierarchy, especially in the way a leaving cert graduate can ‘catapult’ into a Level 8 programme.
    Looking at it in this way I have to ask where the ‘lower’ development occurs in the interaction between the NFQ and the leaving cert?

    • Brendan Guilfoyle Says:

      Good point – as far as I know there is no official taxonomy for the LCE. In practice, it is probably stratified within each subject (easy topics/questions versus hard topics/questions within the syllabi). Whether what is happening can be halted by a more formal establishment of such a taxonomy is debatable.

      • Al Says:

        Thanks for the reply.
        I think this question is of greater importance than the original one raised of grade inflation/educational standards.

        Many of the complaints about the system or systems are that skills haven’t been developed by the learners/leavers!

        The NFQ offers a taxonomy, but we need to look at how a person can skip the first 7 levels and enter into an honours degree.

        I am not saying that the NFQ offers the solution, more that there is a discrepancy here, and an analysis of this discrepancy may address some of the shortcomings presently highlighted.


  6. It is great to see this work being done. Your use of the 3rd assumption to show that the decrease in skewness is not likely to be caused by increasing standards is very interesting (I hope that I’m interpreting that correctly).

    I don’t mean to shift the goalposts, but many of us who instinctively agree that it is happening do wonder if this issue is important or not. It certainly does reduce precision (but isn’t that why we invented the + and – suffixes?). There is not a great need to compare grades over longer periods of time (those of us who only got 5 honours in the seventies just have to get over it) – we only need to be able to discriminate between students that are very close in time. Perhaps there is a very strong argument for grading on the curve.


  7. I’m curious about the fact that grades measure what students do, but our discussion leans so easily to what students are. There are strong-average-weak ranking terms used in both this very interesting post and the comments that suggest wide use of a kind of common sense sorting of students into ability cohorts, even if these are initially predicted by grades — so a student who receives As becomes an A student, and that’s the moment the slip occurs (I’m hearing Sam Cooke at this point.)

    This is important in practice if we then secure behavioural attributes to “best”, “average” and “weak if they show up at all” students, and especially if we teach and grade accordingly. But if we’re doing things well, then these behaviours may or may not correlate to grade outcomes. Behavioural sorting in relation to actual engagement practices could even reverse the order: “students who pay close attention in class because otherwise they will really struggle,” “students who have good days and bad days in terms of attention” and “very bright unmotivated non-attenders who are still smart enough to pull off something interesting in the end.” I can easily imagine weaker results going to students with better habits; and stellar results going to students who showed up at the last minute.

    This isn’t a criticism of the study at all, but a question about how we talk about students in fairly essentialist terms when we’re trying to nail our own essential standards.

  8. Eugene Gath Says:

    University of Limerick has always required an HC3 minimum for all Engineering courses, Mathematical Sciences, Financial Maths etc. Indeed we still offer bonus points for honours maths and as far as I know we are still the only ones that do so. I agree that letting students in with below this level of accomplishment is setting a nigh impossible task for them and is probably wantonly wasting a year or two of their lives.


  9. Hi,
    Having gone through the process recently, I was left underwhelmed by the standard of teaching, or maybe it is the curriculum, but standards have definitely dropped on all fronts.
    Thus I can’t agree with “mathematics has managed to maintain its standard relative to other subjects by these metrics.”

    As many think, the leaving cert is due a large shake-up, and the sooner the better, as students should be “learning”, not just cramming information for one exam and then forgetting it straight afterwards.

