Raw results are worthless
Another set of assessments will not help us to judge our education system, says Roger Murphy
The introduction of "baseline assessments" for 5-year-old pupils during their first term in primary schooling will, in 1998, add yet another layer to an already complex and elaborate system of national assessment. The national curriculum has already established the routine assessment of 7, 11 and 14-year-olds. An increasing proportion of young people now remain in education and are assessed again at 16 and 18. All this adds up to a considerable assessment industry, costing millions of pounds and churning out a huge volume of results each year.
One can legitimately question whether we will be better off for all this assessment information. Presumably there will be some benefit to individual children, their parents and teachers in receiving standardised feedback on progress. However, that has to be weighed against the inevitable traumas of formalised assessment procedures, the time, the cost, and the danger of educational provision narrowing to "teach to the tests".
The biggest concern I have relates to the way results are likely to be summarised and reported to the nation. It has already become something of a ritual for GCSE, A-level and, more recently, GNVQ results to be released in August and then used as the basis for a fairly mindless debate about whether they show that examination papers are getting easier or that standards in schools are improving.
All of this is then accompanied by a rash of league tables purporting to show how schools have performed in relation to each other. Despite repeated warnings from authority figures of all kinds that such tables reveal nothing about the effectiveness of those schools unless their pupil intakes are comparable, the fun continues!
If adding further layers of assessment simply multiplies the occasions each year on which the media report useless aggregations of national assessment results, then we are heading downhill fast. The reality is that no set of national assessment results tells us very much unless it is analysed and interpreted in relation to the other data needed to make sense of the complexities it contains.
To move things on, I would like to put forward two proposals which I hope will increase the likelihood of informed use of national assessment results.
The first is to call for the establishment of a national database giving accredited educational researchers and analysts access to national assessment results not just year by year, but with cross-referencing to the achievements of the same students at other stages and to background data such as gender and social class, which are known to be relevant factors in interpreting assessment results.
The DfEE and SCAA have already taken some steps towards developing such a database for their own purposes, but access must be assured for a wider range of users, and those potential users should have some say in deciding what data are available and how they are configured.
The second proposal is an insistence that a government health warning should be printed alongside any league table or simplistic analysis of raw assessment results.
The scope and exact formulation of this warning will need some refinement, but it should encompass requirements that:
(a) All league tables of schools based upon raw assessment results should point out that no meaningful comparisons can be drawn about the quality or effectiveness of the schools unless their results are interpreted in relation to information about the characteristics and prior achievement of pupils entering them.
(b) Comparisons made between the achievements of pupils in different subjects are likely to be severely hampered by the impossibility of equating assessment results across subjects.
(c) Results in national assessments in any one year depend partly on the demographic characteristics of that year group, and may vary from results in preceding years because of demographic as well as educational reasons.
Much as people would like national assessments to operate as some kind of educational barometer, they can never be that. Assessing educational achievement is far more complex than measuring rainfall or average temperatures. What we need in future are more "value-added" analyses of results, which have the potential to enlighten us about real progress and achievements, and fewer league tables of raw results, which are rarely worth the paper they are printed on.
Professor Roger Murphy is Dean of the Faculty of Education at the University of Nottingham and President of the British Educational Research Association. This article is based on a paper to be delivered at the BERA Annual Conference at Lancaster University tomorrow (13 September).