This week I ‘fed back’ for an hour to a Y11 class on a Germany PPE. I told them that I wasn’t going to give them any grades until they’d had the feedback, that by the end they’d realise how pointless that knowledge might be anyway, that I’d only give them scores, not letters, and only if they specifically asked for them, and that since everyone in every history class has a target of 100% – whatever their reports might say – we should all focus on all being the best.

“So why did we do the test, then?” one asked. “I want to know my score.”

“It’s about learning from mistakes in a pressurised environment. You can all follow instructions in here; you can all answer my questions when you’re sat in front of me, but in the exam you have to do it alone. It’s practice for you. By the end you shouldn’t need to know, but I’ll tell you if you really want. My questions to you are: why do you want to know, and how will that grade help you?”


For this session I’d completed the PPE myself and handed it to the class. We went through each question, looking at the phrasing, what exactly candidates were being asked to do, the weighting of the history against the mechanics of answering the bloody thing, timing, and the similarities and differences between questions. I don’t think that’s anything special, really, but it is incredibly useful for them. I do the same before PPEs, too: a kind of pre-brief, although not with the actual paper. Again, though, nothing special.

All of this is focussed on helping children keep calm when they know the subject, so that when an unusually phrased question pops up, for example, they can breathe easy knowing that it’s a seven-marker, meaning x, y, z: “Count to seven on your fingers: the two on the other hand mean two paragraphs.” Nothing special.


What do I get from this? Writing out the answers is useful, as is watching them spot – or fail to spot – the turns of phrase which gain me extra marks; and because the focus is on what is right, not on what went wrong and how shit anyone is at exams, it helps me keep a positive atmosphere. But the data? The numbers? What do they mean?

Let’s take a QLA (question-level analysis). I punch a load of numbers into a big, colour-coded, condition-laden spreadsheet in the hope that where the red sea swamps the green hills I can at least understand something about both individual and group performance on single questions, types of questions, timing, strength of knowledge, technique, amount of revision, what they ate for breakfast, etcetera.
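If you’ve never built one, the mechanics are roughly this – a minimal sketch with made-up pupils, made-up scores and made-up colour thresholds, nothing like any school’s actual template:

```python
# A minimal question-level analysis (QLA) sketch: per-question class percentages,
# RAG-coded against arbitrary thresholds. Names, scores and max marks are all invented.
max_marks = {"Q1": 4, "Q2": 7, "Q3": 7, "Q4": 12}

scores = {
    "Aisha": {"Q1": 4, "Q2": 5, "Q3": 2, "Q4": 8},
    "Ben":   {"Q1": 3, "Q2": 6, "Q3": 1, "Q4": 10},
    "Chloe": {"Q1": 2, "Q2": 3, "Q3": 0, "Q4": 6},
}

def rag(pct):
    # The colour bands are exactly the arbitrary cut-offs a spreadsheet hides behind.
    return "GREEN" if pct >= 70 else "AMBER" if pct >= 45 else "RED"

for question, maximum in max_marks.items():
    total = sum(pupil[question] for pupil in scores.values())
    pct = 100 * total / (maximum * len(scores))
    print(f"{question}: {pct:.0f}% {rag(pct)}")
```

It looks authoritative once it’s colour-coded; the thresholds, though, are the part nobody ever justifies.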

But I can’t. Not really. I might have some ideas, but – actually – I probably know what’s gone wrong – or right, although I think that’s harder (and this is where comparative judgement should be brilliant) – as I’m marking each essay. The spreadsheet is a problem: it’s loaded with too much information and expectation. And because we’re time-challenged (a phrase I very deliberately choose because of its utter inability to convey just how little time a teacher has to do anything effectively), every year we look for, and find, shortcuts which don’t actually exist. And then (!) we over-estimate our own ability to draw inferences from these numbers because a) COLOURS and b) someone else agrees BECAUSE OF COURSE THEY AGREE, THEY DON’T UNDERSTAND EITHER.

Right. And that’s why reading children’s work is vital, and why writing pen-loads of comments all over half a page of writing, drawing inferences from the most over-stretched (sorry, *creative) parts of our tired, flaccid brains is a time-consuming waste of consumed time.

I don’t know about you (although I’m now going to assume I do, because I genuinely assume this to be the case), but I can predict 90% of the mistakes a class will make on an exam. Because I taught them. I don’t need to write comments for that 90%.

It’s the other 10% of problems which I’m looking for, and that’s why I assess – to hunt the problems. But, and here is the thing – no, really: this is it – I have to know what I wanted in the first place to make sure I’m hunting in the right place. If I look for data to tell me everything then I might well end up knowing nothing, lost as I’ll find myself in a noise of competing, shouty numbers.


Rather than asking “What can the data tell us?”, we need to ask “What are we assessing, and why? What information will we receive?” Asked that way, the anomalies stand out and we can focus on something worthwhile, rather than drowning in RAG.

See this from Ben Newmark on finding the expected and unexpected problems.
