On Thursday night I gave a short talk at Teach Meet Devon, held at Tavistock College. Below is a short summary of my thoughts on testing. I owe a great deal of gratitude to Daisy Christodoulou in particular for her superb blogs on assessment.
Testing isn’t evil, but we like to think it is. Tests appear to be these monstrous, dehumanising, homogenising Borg-like Things From Planet Kinderhate!, their child-catching, freedom-sucking, slimy, angular arms always looking for more ways to scratch tears into human children’s puppy-dog eyes. We have this weird, innate fear of tests, as if they’re some concrete-block, minimalist Soviet hangover, more Zamyatinian than Orwellian.
There’s the accusation that tests don’t tell us about the whole child, whatever that is, and that they’re artificial, that tests have no relevance in real life because you wouldn’t take a test in real life. Apart from the times when you might. But that’s not the point, is it? Tests don’t show what children can do in context, with real-world problems, like bringing down an international bank, winning an election or surviving a nuclear explosion.
Also, tests are just plain mean, aren’t they (and not just for the children)? I mean, you can’t love children properly if you test them, right? Tests are scary and make little Sufi and Barry cry until they’re sat, forlorn, in big ol’ pools of broken dreams. After all, we know what our children can do and how dare some test come along, with all its ink and questions-to-make-you-think, and dare to know the little unicorn-chasers better than us? Nonsense. Even Mary Bousted of the ATL said that tests disadvantage children from difficult homes, and she’s never said anything silly!
But! There’s an inconvenient truth: teacher assessment is biased, and you can read all about it here. If you can’t click the link because you’ve suddenly contracted leprosy then the essential problem is that knowing our pupils is the problem: rather than failing to see the wood for the trees, we’re consumed by the whole forest and can’t even see the bloody floor.
Also, and lots of people have written about this (so just go Google it, yeah?), apart from teachers being TIRED and looking for EASY ANSWERS (#5minutemarkingplan), how many of us have argued for hours over what’s a 14/20 as opposed to a 13/20 in the controlled assessment? Because I have. I’ve had lead examiners tell me, in confidence of course, that a CA will probably get an A* when I don’t think it’s worth a D. I mean, what? Who’s right? (Well, I think I am, but that’s the problem right there.)
The thing about tests – at least simple, short-answer tests – is that they avoid these flaws inherent in teacher assessment. In fact (long sentence warning), a little birdie tells me that many exam boards would quite happily abandon the hoopla-rubrics for multiple choice questions, or even comparative judgement, if it weren’t for the fact that teachers are so wedded to their own high opinions of their own often terrible ability to assess their own pupils.
One thing we’re introducing across the board at Torquay Academy next year is knowledge organisers. Again, lots has been written about these (see here and here), but one of the main benefits is that they can be used for frequent, low-stakes testing and self-quizzing. The general idea is that a KO will have the skeletal knowledge required for a particular topic. In history I can add timelines, key people, key words and even maps. Maybe in art there’d be a chronology of genres, whilst in PE we might have diagrams of movements.
Ok, fine – but what has this got to do with testing? Well, if each piece of information is numbered then that makes quick tests and self-quizzing very easy. I tend to start most lessons now with a key word test in the original KO order, followed by a couple of jumbled versions to make sure those 100%s aren’t a result of knowing the order.
This is such a quick win: the knowledge goes in, children see immediate success and are able to pinpoint what it is they don’t know. I can also easily create MCQs from this KO. Joe Kirby has written a very thorough explanation here, so go read.
But, some subjects need pupils to write, right? And writing, especially in my subject, creates lots of marking and, for the reasons I’ve outlined above, we really can’t be sure just how subjective we’re being. So what else can we do?
Having read a bit about comparative judgement recently, and visited Johnny Porter at Michaela School, I’d highly recommend suspending your disbelief and trying this out.
Again, you can read about this here and here, but I’ll outline the basic principle below. Essentially, by comparing two essays side by side we, as humans who spend a hell of a lot of time making snap judgements like that bunny looks a safer bet to hang out with than the ravenous tiger, can make very quick and surprisingly accurate judgements about which of the two is better. By doing this a lot, and by getting colleagues involved as well, we end up with a less time-consuming, less biased and more accurate idea of what our students can do and where they really sit.
This doesn’t alleviate the pain of actually having to write something, but it does make our lives much easier. And the best thing? There’s even a website which promises no more marking!
So, who said testing has to be evil?