It cannot have escaped your attention that the person most likely to be in charge of a school’s data is someone who likes spreadsheets. And with this being the case, you’ve probably also noticed that the other people who talk about data, dancing excitedly around the office with a new request (to which, of course, they already know the answer), also like spreadsheets. These are the Excel Warriors, hewn from hyperlinked formulas, conditional formatting and years spent trawling support.office.com. They’re also the first people who know how to create a SIMS letter containing every behavioural incident, level and change of hair colour for Rashida Radish in just 900 clicks. These people have incredibly useful skills, and they perform a job which many of us are simply not interested in, even if we were at all capable of it.
But: collating – and then homogenising and presenting – whole-school data for a particular audience does not denote comprehension. Even the most well-designed spreadsheet, resplendent in RAG and layered with various malleable formulae, only implies certain characteristics. These are then often dangerously misinterpreted, either by said data managers or by various other interested parties who can, and do, manipulate that which is presented before us for their own ends, whether benign or otherwise.
Now, this is always going to happen to some extent. But what if those data managers were to actually understand something about the data’s origins? For example, it’s taken as read in most schools – I hope – that the data for MFL at KS3 will look much worse than other subjects because so few children studied French, or whatever, at anything other than a most basic, au revoir-bonjour level in primary schools. MFL teachers complain, understandably, but any sensible leadership types do not for a moment ask stupid questions like, “None of your Y7s are above L2, despite 85% of them having an end of year target of L5 or above, and yet in English 95% are at L5 – what’s going on?” Actually, I know these conversations do happen, but hey – WTF, yeah?
So that’s one thing. Humanities and PE teachers also complain that their targets are based on English and Maths and that means not-a-lot when you’re trying to teach Jamie to catch a ball. And what about the arts? Well, exactly. A panoply of misunderstood data within schools is bad enough. What if national data is also homogenised and presented poorly? Hello KS2 results! David Didau, Steve Adcock and Tom Sherrington have written here, here, here and here in the last few days about whole-school data, zero-sum games and the danger of throwing babies out with bathwater. And yeah, these are problems which we all encounter and are familiar with, so I’m not really saying anything new so far.
What I would like to see, then, is a better understanding of data at subject level, better communication between data managers and subject leaders and a recognition of the limits of comparing chalk and cheese. I, frankly, rejoiced at the end of levels, but the last thing I’d want would be someone with no understanding of how history works telling me that my data must be collected and presented in a catch-all-subjects spreadsheet, complete with a school-wide, parent-friendly scale. Because that wouldn’t make sense: it’s one thing to look at GCSE assessment objectives – which in history are always the same, just with slightly different weightings – but it’s another to put all subjects on the same scale for ease of analysis and departmental comparison. Why? Because, just as subjects are different, so it follows that their assessments also differ, and thus putting two on the same scale might actually muddy the waters even further.
Take English and history: pretty similar, no? Both require knowledge, of history or texts; both require analysis and evaluation; both require good levels of extended written English. And yet it is the very knowledge required in each subject which makes the assessment requirements so different: whilst an English teacher might well, and should, teach a little of dustbowl, depressed 1930s America in context before reading Of Mice and Men, a history teacher will instead look at the causes and changes during the 1930s when considering the same period, perhaps pausing to nod at Steinbeck as an interpretation. And thus, whilst the assessment objectives might appear similar for these examples, what’s actually required of students is very different: one might be an analysis of the relationship between the principal characters placed in historical context, with the weighting heavily tilted towards textual analysis; the other might require students to make inferences from a conversation between those characters about the period for a particular social class. These are not two sides of the same coin, but two different coins: the procedural knowledge – or skills, if you must – might appear similar, yes. But these skills originate from different domains and so are performed differently, using distinct knowledge and methods of application.
And so, to the crux of the matter: does your data manager understand these differences? At a surface level, probably. But enough to present the data in context of itself only? Because sitting the MFL data for the first term next to maths probably won’t look great, and that can lead to all sorts of gloating, sneering and gossip – and don’t tell me that doesn’t happen.
Now, if the answer to the above question is a fat No with a capital “N”, then that probably isn’t that person’s fault. Not all data managers are, or were, teachers: some are the guy who does cover, whilst others are whoever’s around in admin at the time. One school I worked in didn’t replace their data guy, instead shoving the role onto the pastoral lead’s plate just because, and I’m quoting now, “Well, he’s good at Excel, isn’t he? He’s an IT teacher. It’ll be easy.” So what am I arguing for? Better communication, and a recognition that homogenising data for ease of presentation can only lead to poor questions being asked, both in terms of accountability and of what we can do better for our students. I don’t think that’s too much to ask.