Some work I have been doing has resurfaced thoughts about this. The stock phrase meted out by critics of today’s young people is that standards are lower these days, that exams are easier: they must be, because there is a year-on-year rise in the number of top grades achieved. There is much more to this than can be discussed in one blog post, so this one will focus only on the ways grade boundaries have been determined within GCE A levels, the most common qualification for university entry in England.
Norm referencing: When GCE A levels were introduced in 1951 they were initially graded as pass or distinction. In 1963 this moved to a five-point, norm-referenced scale, meaning that fixed percentages of the papers were given particular grades: the top 10% got an A, the next 15% a B, then 10% C, 15% D and 20% E, with the remaining 20% receiving an O level pass. Norm referencing rests on the assumption that the number of candidates is large enough that standards are unlikely to fluctuate much from year to year, i.e. that the ability of the school population remains largely fixed. Thus the number of grade A candidates was capped. Depending on how candidates responded to the paper, there could be as few as 8 marks between the grade boundaries.
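The quota idea above can be sketched in a few lines of code. This is purely illustrative: the marks are invented, and the tie-handling and rounding are my own simplifications, not the examiners' actual procedure.

```python
def norm_reference(scores, quotas):
    """Assign grades so that fixed proportions of candidates get each grade.

    scores: list of raw marks; quotas: list of (grade, proportion) pairs,
    applied from the top of the rank order downwards.
    """
    # Rank candidates from highest mark to lowest.
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    grades = [None] * len(scores)
    start = 0
    for grade, proportion in quotas:
        count = round(proportion * len(scores))
        for i in ranked[start:start + count]:
            grades[i] = grade
        start += count
    # Anyone left below the final quota band fails.
    for i in ranked[start:]:
        grades[i] = "Fail"
    return grades

# The 1963 quotas described above, applied to ten invented marks.
quotas = [("A", 0.10), ("B", 0.15), ("C", 0.10),
          ("D", 0.15), ("E", 0.20), ("O pass", 0.20)]
scores = [72, 65, 64, 58, 55, 51, 48, 44, 40, 31]
print(norm_reference(scores, quotas))
```

Note that the grade a candidate receives here depends only on their rank position, not on any fixed mark threshold, which is exactly why boundaries could drift so close together on an unusual paper.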
Criterion referencing: In 1987 a system of criterion referencing was introduced, and this remained in place until the Curriculum 2000 reforms. The AS level as half an A level (work assessed at A level standard but covering only half the content) was introduced in 1989 but was not popular. Examiners produced criteria for what a B grade and an E grade candidate “looked like” (for want of a better term), and the intermediate grades were determined by equal intervals between the two criteria. The O level pass was replaced by a grade N for ‘nearly passed’. Criterion referencing was a reasonably successful system in A level chemistry; criticism of it came mainly from the humanities and arts subjects. The driving test is an example of a criterion-referenced assessment.
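The equal-interval rule for the intermediate grades is simple arithmetic, and can be sketched as below. The boundary marks are hypothetical; only the interpolation between the judgmentally set B and E cut-offs reflects the description above.

```python
def intermediate_boundaries(b_mark, e_mark):
    """Place the C and D boundaries at equal intervals between B and E."""
    step = (b_mark - e_mark) / 3  # three equal intervals: B-C, C-D, D-E
    c_mark = b_mark - step
    d_mark = b_mark - 2 * step
    return round(c_mark), round(d_mark)

# e.g. if examiners set the B boundary at 66 marks and E at 36 (invented),
# C and D fall at 56 and 46:
print(intermediate_boundaries(66, 36))
```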
‘Soft’ criterion referencing: A levels are now neither criterion nor norm referenced; this system has been in place since the introduction of Curriculum 2000. To a certain extent they sit somewhere between the two. Anecdotally there is a certain amount of predictability about the papers, which most likely contributes to the grade boundaries (at least for chemistry) remaining in roughly the same ballpark each year. Grade boundaries are determined by a committee, with statistical evidence and sample scripts forming the basis of the discussion. The UMS (Uniform Mark Scale) allows any swings in cohort performance to be adjusted by scaling the marks. This was particularly obvious in the introduction of the ISA practical exam, where marks were scaled to such an extent that as few as 2 marks lay between each grade boundary. The AS exam is set at a lower standard of challenge than the A level and, until the most recent incarnation, counted towards the overall A level grade. Resits between 2002 and 2014 were common.
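The scaling of raw marks onto the UMS can be sketched as a piecewise-linear mapping between that session's raw grade boundaries and the fixed UMS boundaries. The boundary marks below are invented for illustration; the exam boards' exact conversion tables will differ in detail.

```python
def raw_to_ums(raw, raw_bounds, ums_bounds):
    """Interpolate a raw mark onto the uniform mark scale.

    raw_bounds / ums_bounds: boundary marks from lowest to highest,
    e.g. [0, E, D, C, B, A, max] on each scale.
    """
    for i in range(len(raw_bounds) - 1):
        lo, hi = raw_bounds[i], raw_bounds[i + 1]
        if raw <= hi:
            # Position within this grade band, mapped onto the UMS band.
            frac = (raw - lo) / (hi - lo)
            u_lo, u_hi = ums_bounds[i], ums_bounds[i + 1]
            return round(u_lo + frac * (u_hi - u_lo))
    return ums_bounds[-1]

raw_boundaries = [0, 28, 34, 40, 46, 52, 60]   # hypothetical raw boundaries
ums_boundaries = [0, 40, 50, 60, 70, 80, 100]  # fixed UMS boundaries
print(raw_to_ums(49, raw_boundaries, ums_boundaries))
```

If a paper turns out harder than usual, the raw boundaries drop, but the UMS boundaries stay fixed, so a given grade always corresponds to the same uniform mark. This is also how a compressed band (the 2-mark ISA gaps mentioned above) can stretch a couple of raw marks across a full 10 UMS points.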
Since the A level is to a certain extent a university entrance exam, what do universities use? From what I have experienced, a soft referencing system, tending more towards norm referencing than criterion referencing but having aspects of both, seems to be most common. There is far greater variability in papers in HE, and candidate responses can also vary more widely. Committees are also used for the process, in a similar way to A levels.
So what can be taken from all this? Firstly, that a grade A from a 1986 A level cannot be fairly compared with a grade A from a 1996 A level: quite apart from any potential differences in curricula, they were awarded under quite different regimes. Also, the level of political interference in the A level has made it quite difficult for the lay person to get a good feel for the trends.
More to come on this theme later…