
What kind of analytics does a school or MAT leader need?

By Joshua Perry

We’ve asked Joshua Perry, education technology expert and entrepreneur, to write a series of blogs about analytics and assessment. The first instalment examines why we bother with analysis in the first place, and the second discusses analytics for classroom teachers.

This is the third instalment, which looks at the type of data a school or MAT leader needs. Joshua is on Twitter as @bringmoredata

Analytics for school leaders and Multi Academy Trust (MAT) leaders serves a different purpose from analytics for classroom teachers, and I don’t think those differences are always clearly understood. So in this blog I’ll try to set out some important delineations between the two. 

First, I should explain that I’m lumping together middle leaders, senior leaders and MAT central teams in one crude category of “leadership”. Clearly each of these audiences has different needs; but I think their needs are at least somewhat comparable, in that they’re interested in aggregated analysis comparing demographics / classes / subjects / schools. Still, where there are important distinctions between the groups I’ll make that clear.

As I explained in my previous post, I think the starting point for any analysis needs to be: what decisions can data usefully influence? There’s no point spending valuable time gathering and analysing data if you’re not going to make more informed decisions as a consequence. In the case of leadership, the key questions might be things like:

  1. Are there in-school / inter-school variations in performance that need to be addressed?
  2. Am I on track in relation to relevant targets?
  3. Are my staff and students happy and healthy?
  4. Are there pastoral problems that require management attention (e.g. patterns of low attendance; spikes in exclusions)?

All of these matter, but my main focus in this blog will be questions 1 & 2, which relate to academic assessment. 

Are there in-school / inter-school variations in performance that need to be addressed?

This is a biggie, as we know that In-School Variation (ISV) is a huge contributing factor to our system-wide outcomes. Mike Treadaway’s helpful blog on this subject for FFT’s Datalab last year concluded by stating:

“Within-school variation is important, and reducing it could have a significant impact on overall national standards of attainment.”

– Mike Treadaway, ‘Looking within, part 1: How much difference does within-school variation make?’

That’s a big statement: systemic change is incredibly hard to bring about, so anything that can contribute to it will of course have meaning at the school level too. Treadaway’s series of blogs on this subject also offers some important pointers on which kinds of analysis matter most. He concludes that:

  • There are wide variations within schools in subject performance (part 2). 
  • There are significant variations between schools when pupil characteristics are considered, but only small variations within schools (part 3).

This tells us that school leaders can meaningfully influence overall results by looking at the variation in performance between subjects and working out why those outcomes vary. In other words, if a school excels at biology but struggles with history, why is that? 

Treadaway’s analysis also tells us that MAT leaders may get more from characteristic analysis than school leaders since the variation is likely to be greater between schools. I discussed in my previous blog that I’m wary of class teachers overusing group analysis, and the same can be true of school and MAT leaders; but at the same time, providing the right groups are being compared, there’s definitely value to be had when the group analysis gets broader. For example, in a MAT it’s profoundly helpful to know which school gets the best results with EAL students, both in specific subjects and across the board: perhaps that school can lead a network-wide training day on the subject.
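To make that concrete, here’s a minimal sketch in Python with pandas of the sort of cross-school comparison described above. The dataframe and field names (school, subject, eal, standardised_score) are entirely hypothetical, standing in for whatever your MIS or assessment platform actually exports:

```python
import pandas as pd

# Hypothetical MAT-wide assessment extract: one row per pupil per subject.
# All field names and values are invented for illustration only.
results = pd.DataFrame({
    "school":             ["Oak", "Oak", "Elm", "Elm", "Ash", "Ash", "Ash"],
    "subject":            ["Maths", "English", "Maths", "English", "Maths", "English", "Maths"],
    "eal":                [True, True, True, False, True, True, False],
    "standardised_score": [104, 98, 112, 107, 95, 101, 99],
})

# Mean standardised score for EAL pupils by school and subject, with an
# "All subjects" margin to show who does best across the board.
eal_by_school = pd.pivot_table(
    results[results["eal"]],
    values="standardised_score",
    index="school",
    columns="subject",
    aggfunc="mean",
    margins=True,
    margins_name="All subjects",
)
print(eal_by_school.round(1))
```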

I also think MATs can play a much greater role in analysing variations between subjects – and even within subjects. Few MATs can afford full-time network leads for every area of the curriculum, and the network’s subject strengths and weaknesses must surely be an important input into resourcing decisions. Then, at the more granular level, if I were a network lead for maths, I’d want to know whether my schools were better at teaching fractions or geometry, so I could plan professional development and curriculum materials accordingly.
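Here’s a rough sketch of what that strand-level comparison might look like, again with made-up field names (school, strand, pct_correct) rather than any real system’s schema:

```python
import pandas as pd

# Hypothetical maths results rolled up to curriculum strand:
# one row per pupil per strand, with the share of marks achieved.
strand_results = pd.DataFrame({
    "school":      ["Oak", "Oak", "Elm", "Elm", "Ash", "Ash"],
    "strand":      ["Fractions", "Geometry"] * 3,
    "pct_correct": [61, 74, 55, 70, 68, 66],
})

# Average performance by school and strand, plus the gap between each
# school's strongest and weakest strand - a crude within-subject
# variation measure for prioritising CPD and curriculum support.
strand_means = (
    strand_results.groupby(["school", "strand"])["pct_correct"]
    .mean()
    .unstack("strand")
)
strand_means["strand_gap"] = strand_means.max(axis=1) - strand_means.min(axis=1)
print(strand_means.sort_values("strand_gap", ascending=False))
```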

“Technology has a vital role to play here: this kind of analysis is incredibly hard without smart systems to do the crunching.”

Am I on track in relation to relevant targets?

This has traditionally been seen through the lens of “Will Ofsted be happy?” Clearly this matters on some level – and those who say it doesn’t have presumably never tried getting a new headteacher job off the back of an “Inadequate” rating. At the same time, generating analysis explicitly for Ofsted is thankfully becoming a thing of the past. It doesn’t take long to look at the data that Ofsted care about, because under the new inspection framework they don’t look at internal data any more. So all you have to do is get to grips with your public data and you’re sorted on that front.

This frees you up to think through the question of whether you’re on track in relation to targets that you decide are important. This can still be a bit of a minefield – for example, plenty of schools try to calculate Progress 8 in advance of the formal DfE data release, or model similar metrics for earlier year groups. Frankly, a lot of the analysis that starts this way ends up being shaky, as Progress 8 isn’t designed to be a measure that can be replicated prior to receiving a national set of Key Stage 4 grades.

So what are the alternatives when it comes to progress? Well, one option is to use standardised assessments and track the variation in average percentile for a given subject over time. The beauty of this kind of approach is that it marries reliability (standardised assessments have an objective quality that is particularly helpful when comparing schools within a MAT) with simplicity (if your average percentile rank goes up, you’ve added value). Another common approach is to compare current performance to expectations derived from prior attainment. Some more advanced networks are even writing their own assessments and standardising them with reference to prior attainment – Rich Davies has written a fascinating explanation of the process used by Ark Schools here.
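As an illustration of the average-percentile idea, here’s a minimal sketch with invented field names (subject, term, percentile_rank); a real version would obviously work from your assessment platform’s export rather than a hand-typed dataframe:

```python
import pandas as pd

# Hypothetical standardised assessment results: one row per pupil per sitting.
sittings = pd.DataFrame({
    "subject":         ["Maths"] * 4 + ["Reading"] * 4,
    "term":            ["Autumn", "Autumn", "Summer", "Summer"] * 2,
    "percentile_rank": [42, 58, 49, 63, 55, 61, 52, 60],
})

# Average percentile rank per subject per term, and the movement between
# terms: a positive change suggests the cohort has gained ground against
# the standardisation sample.
avg_percentile = (
    sittings.groupby(["subject", "term"])["percentile_rank"]
    .mean()
    .unstack("term")
)
avg_percentile["change"] = avg_percentile["Summer"] - avg_percentile["Autumn"]
print(avg_percentile.round(1))
```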

When considering performance against targets, the question of whether to incorporate such data into performance management frequently comes up. The government’s Making data work report from 2018 was pretty unequivocal on this, stating:

“Objectives and performance management discussions should not be based on teacher-generated data and predictions”.

– UK GOV, ‘Making data work’

I’m sympathetic to this view – not least because the reliability of school data is inevitably compromised if you tell the person administering an assessment (i.e. the class teacher) that their career progression is based on how those assessments go. If a teacher sees the test in advance, they’ll be more likely to teach to it, and they may also be tempted to make the test conditions easier (e.g. allowing a bit more time than the guidelines state).

A related minefield stems from leaders’ temptation to see ever more granular data – including aggregations of teachers’ frequent formative assessments. I think this is a mistake: a good summative assessment is useful for leaders while also having formative value for the class teacher, but a formative assessment designed for responsive teaching needs to stay in the classroom. Leaders should understand that once teachers know they’re seeing data too, the test’s purpose is changed, and that can skew behaviours. Or, to put it another way, if you want your teachers to be using formative assessments to spot gaps in learning, don’t incentivise them to set easier tests where everyone gets 100% just to impress you. There are caveats – I can see the value in collecting some metadata on formative assessment use (e.g. % of students logged in to a system over the past week), providing the data collected doesn’t distort behaviour.
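For the metadata example, a sketch might be as simple as the following; the logins table, the dates and the roll count are all invented, and the point is that nothing about individual scores is collected:

```python
import pandas as pd

# Hypothetical login log from a formative assessment platform.
logins = pd.DataFrame({
    "student_id": [1, 1, 2, 3, 3, 5],
    "login_date": pd.to_datetime([
        "2020-03-02", "2020-03-05", "2020-03-04",
        "2020-02-18", "2020-03-06", "2020-03-03",
    ]),
})
students_on_roll = 180              # pupils in the cohort being monitored
as_of = pd.Timestamp("2020-03-06")  # the date the report is run

# Share of pupils who logged in at least once in the past seven days -
# engagement metadata only, with no sight of the scores themselves.
active = logins.loc[
    logins["login_date"] >= as_of - pd.Timedelta(days=7), "student_id"
].nunique()
print(f"{active / students_on_roll:.1%} of pupils active in the past week")
```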

“I think summative assessments are underused by leaders beyond the summary grade.”

In contrast, I think summative assessments are underused by leaders beyond the summary grade. It’s mostly a technology issue – few MAT Directors of Education would tell you they don’t want to know performance broken down by curriculum strand, for example – but vendor systems have rarely been able to handle this kind of complexity. However, that’s changing, and I’m optimistic for the future – analytics is becoming a real focus of R&D expenditure within education. Indeed, Renaissance has shared its plans for MAT-level analysis with me and they’re exciting, so watch this space!

Finally, it’s important to remember that while data can help to pose questions and point to possible answers, a pretty report is never the end of the investigative process. A good leader takes an issue that has been highlighted by data and then discusses the specific context with their team until things are properly understood and a course of action can be defined.

 

Joshua’s blog series can be read here. To see how we’re supporting students and teachers during school closures, click here. You can follow Joshua on Twitter at @bringmoredata and Renaissance at @RenLearnUK

