This post focuses on evidence of marking, but it could be anything really. Anyway, enjoy!

‘The issue that we have, Toby, is that there’s no evidence of your marking in the books for the last six weeks. You’ve not written anything at all since the last essay.’

‘Well, no – there won’t be. I don’t understand what the issue is.’

‘We need to see that you’ve been marking their books so that there’s evidence that you’re helping them to progress. If we can’t see your comments then how do we know what’s going on in the lesson?’

‘Surely you know what’s going on in the lesson by the quality of what they’ve written? You’ve commented already on how neat their books are and how well they’ve been writing, so I still don’t understand what the issue is.’

‘But you’ve not evidenced that.’ 

‘Isn’t the evidence in the quality of their work? I mean, look at what Rory was writing at the start of the year and look at what he’s writing now – look at the difference. And this is across the board. The evidence is there. If they weren’t getting feedback then they wouldn’t be writing more fluently now than at the start of the year.’

‘But again, Toby, it isn’t clear. You’ve not made clear to me or an Ofsted inspector what you’ve done.’

‘Eh? So it happens by magic, does it? Look, I read their books both during lessons and …’

‘So you could use a verbal feedback stamp to evidence that?’

‘… hang on, that wouldn’t be evidence of the students thinking – that would just be evidence that I had a stamp, wouldn’t it? Anyway, I read their books during lessons and when I’m free. I must do because otherwise I wouldn’t know how to help them improve, which the vast majority have done. There’s your evidence.’

Evidence? What evidence?

In history classrooms a source becomes evidence when we ask a question of it. This, from Cambridge University’s Faculty of History, explains it better than I could:

Any leftover of the past can be considered a source. It might well be a document, and we often think of history as a textual discipline, based on the interpretation of written texts, but it might also be a building, a piece of art or an ephemeral object – a train ticket, say, or perhaps a pair of shoes. These are all ‘sources’ because they all provide us in different ways with information which can add to the sum of our knowledge of the past. Sources only become historical evidence, however, when they are interpreted by the historian to make sense of the past. The answers they provide will very much depend on the sorts of questions historians are asking. For example, a train ticket might be used to provide evidence of migration patterns or of the cost of living at a particular time, but also of broader cultural trends: for many years, for example, it was the practice to print a ‘W’ on a woman’s ticket (this was when stations had women-only waiting rooms and trains had women-only carriages). As for a pair of shoes, it might provide the cultural historian with evidence of changing fashions and consumer tastes, or the social historian with evidence of class differences or production patterns. It all depends on what the historian wants to know. This is why it makes little sense to ask if something is ‘good historical evidence’, without knowing what evidence it’s supposed to provide.

I’d like to apply the same rules to evidence in schools. A teacher’s marking is almost universally agreed to be ‘good evidence’, but for what? How hard they work? How many PPAs they have? Whether they have a life outside school? Whether they – wait for it – mark? Because if that’s what we measure then that’s what we get: the amount of marking is evidence of how much marking is done. Marking might well have an effect, or it might not. For the record I’m not sure it does, but that’s another story.

Marking is such an obvious thing to see that it’s used as evidence of a teacher doing their job. But it tells us very little, if anything, about a teacher’s effectiveness or a student’s progress. And why is marking not ‘good evidence’ of a student’s progress over time? I think the answer is so bloody obvious that it pains me to write it, but here we go:

It is the students who make the progress, not the teacher.

Marking is a giant stick with which to beat teachers. That being the case, we have to ask different, more subtle questions of exercise books if we want to interpret what we see as evidence of students making progress. For example:

  • Is there a difference in the student’s ability to write coherent arguments compared with September?
  • Does the student take more, less or the same amount of care in terms of presentation and organisation?
  • Does the student complete tasks?
  • Does the student repeat the same spelling mistakes?
  • If there are comments, has the student responded effectively?
  • Is the student using more complex language, whether in terms of terminology or syntax, than in September?
  • Does the student refer to knowledge from previous topics in their current writing?
  • Does the student attempt harder questions as the year progresses?

The answers might help to provide evidence of a student’s progress, but they might not. Take the question ‘If there are comments, has the student responded effectively?’ Well, maybe the teacher has asked a question to provoke uncertainty. If this is the case then what is an effective response? And is the observer in a position to make a judgement? Or is this something which perhaps won’t have an immediate, tangible or visible impact?

What about ‘Does the student attempt harder questions as the year progresses?’ If Jasmine isn’t doing this then maybe that’s because she’s practising getting the basics right. In this case progress won’t appear to be rapid and sustained, but at least she’s not running before she can walk. On the other hand, maybe the teacher does need to intervene: does Jasmine lack confidence? Why does she keep making the same mistakes? What can the school do to help her? Is this what the teacher expected? Has she peaked already? Is there something else going on? Where is she sat? Has she misunderstood the tasks?

Notice that none of these questions puts the teacher in the spotlight. This is because the questions asked are about the student. But ask about marking and it’s the teacher who’s centre stage. If you look for marking, you’ll find some or you won’t, and the person responsible for that is the teacher. If you look for progress, then the immediate questions focus on more nuanced evidence – about challenge and attitude, for example.

Don’t get me wrong, the teacher’s role in setting challenging work, providing support and talking with students about how to improve is paramount. I have little time for those (few, I hope) who let the rest of us down. However, the frequency and quality of marking only offers evidence that the teacher marks: nothing else.

Marking is a poor proxy for effectiveness.

If we’re all now accepting that learning is invisible then we have to also recognise that our input is, at the very least, going to be difficult to see. Thus marking is a poor proxy for judging a teacher’s effectiveness in the classroom. Just as a student’s activity should not be mistaken for achievement, marking should not be viewed as evidence of effectiveness.

To really get to know what’s going on in the classroom an observer has to both have access to lots of information and be able to interpret that information in a variety of ways. That information also has to have validity and reliability: I wrote about the dangers of poorly conceived student voice (if not all student voice, some would argue) – here.

This isn’t really feasible, however, under most school accountability systems and timetables. I know that some schools really do make an effort to capture lots of information, but I reckon they are the few. Using marking as evidence is a quick and easy way for time-challenged, or incompetent, observers to make judgments for the purposes of accountability. These judgments, though, are very poor, based as they are on a misunderstanding of what evidence is.

Finally, an accountability system which uses an irrelevant data set as a central piece of evidence potentially wastes time, money and resources on unnecessary CPD, pointless meetings and unhelpful bureaucracy.

I was joking earlier on: I do understand what the issue is, but it’s nothing to do with me.
