Is There Life After Levels?
On the first day of the term when yet another new National Curriculum began, the BBC TV news chose to trumpet the apparently newsworthy shock that ‘secondary pupils must now study 2 Shakespeare plays’. If only that were the major challenge. The reality, of course, is that you’d be hard pressed to find a secondary school anywhere in the country where this has not been happening for many years.
As is so often the case with reporting of education, all the news media missed the single biggest point: the removal of a national system of ‘levelling’ to track pupil progress. But it’s true; Levels are dead meat. The current system used to report children’s attainment and progress will be removed. It will not be replaced. A June 2013 press release from the DfE explained the reasons for this decision:
- Research shows that, far from raising standards, Levels are actually impeding progress.
- Levels are complicated and difficult to understand, especially for parents.
- They have encouraged teachers to focus on a pupil’s current level, rather than consider more broadly what the pupil can actually do.
- Prescribing a single detailed approach to assessment does not fit with the curriculum freedoms the government is now giving schools.
This was, of course, about as tyre-screeching a U-turn as it was possible to make. For over a decade the orthodoxy has been that the only way to ‘drive up standards’ is to collect vast hard drives full of data on every aspect of pupil performance. This could then – at best – be used to forensically pick over the delicate development of learning in young brains and devise suitable learning schemes to take them forward. At worst, it was used to ‘prove’ to Ofsted that all was rosy in the curricular garden. In general, the whole Levels edifice became more and more concerned with measuring one institution against another (and indeed colleagues within the same department), rather than being about genuinely helping pupils to get better at what they were doing.
Such a system always sat uncomfortably with a subject like English. In their hearts, all good English teachers have always known that progress is mercurial, developed recursively over periods longer than those generally demanded by input to the data monster. How do pupils make progress in our subject? In half an hour? Do they just enter our classroom unable to ‘identify the formal and stylistic features of a persuasive letter’ and rush out less than an hour later, able to spot them at 100 metres in poor visibility and pepper their subsequent letter writing with them? What marks the point at which pupils move from ‘understanding some of the ways texts reflect social, cultural and historical contexts in which they were written’ to the next ‘level’ where they can ‘analyse the ways texts reflect’ this? When can their teacher safely e-tick that box as ‘done’? These are just two examples of indicators I have seen being used in tracking systems. Arguably the greatest casualty of the crude levelling era has been its impact on pupils’ writing. As Professor John Keen has observed:
For complex learning outcomes like writing, there are few pre-determined stages or next steps, only exploration of different possibilities for development.
It would be nice if progress in English were as simple as moving from 4a to 4b. Sadly, we know it isn’t.
But surely, you might say, the current system has worked? Standards have risen, haven’t they? Well, that depends on whose data set you use. The reason the government has lurched from one extreme to the other is that we have inexorably slipped behind ever more countries in the league tables that compare international performance. None of our competitors assess their students anything like as regularly or in such minute detail as we do. They appear to have grasped the point made so eloquently by Sir Ken Robinson in Out of Our Minds:
Instead of … tests being an indicator of how (pupils) are progressing, they are like continually pulling up a plant to see how well it is growing.
So for once, a government reform genuinely offers the chance to reclaim professional control over a crucial part of our work: helping each individual pupil improve their skills in English. We can move away from a restrictive, top-down, objective-focused model in which students are measured against lists of targets and teachers teach to them. In an unpublished paper in 2012, Barbara Bleiman, Co-Director of The English and Media Centre, claimed this had led to:
…a much less ‘responsive’ curriculum and pedagogy than is desirable. A responsive pedagogy is one where the needs of individuals can be addressed and where teachers are encouraged to develop their own judgment about what will make the most difference for their own particular groups and for individuals.
Nobody is suggesting that we do not use carefully considered information to assess where pupils need to go next to improve their knowledge, skills and understanding. But we need to design manageable systems that primarily seek to help individual pupils. In Inside the Black Box in 1998, Paul Black and Dylan Wiliam wrote:
The ultimate user of assessment information that is elicited in order to improve learning is the pupil.
That now needs to be the mission statement guiding our own approaches to monitoring pupil progress.
Whatever we come up with must be simpler than the system being replaced. I recently saw a document put out by a respected national organisation, in an attempt to help its members enter the new world, which proposed 263 different criteria for assessment, each to be judged on a nine-band scale. This gives 2,367 different points at which pupil performance could be ‘ticked’. That way stark raving madness lies.
So as we begin at what feels eerily like Year Zero in terms of assessment systems, what we come up with must have the minimum number of criteria needed to map what pupils must know, understand and do. And above all, we need to find the time and space to talk and write responses to pupils on their own individual performance rather than deluging them with ultimately meaningless numbers. All indications are that it is that – let’s call it good teaching – which will drive up standards.