June 22, 2021

In our recent blog post, you will have seen an overview of the latest findings from our joint project with the Department for Education (DfE) and Education Policy Institute (EPI). This innovative research into the impact of COVID-19 on pupil attainment in England uses Renaissance Star Assessment data and information from the National Pupil Database to provide a robust estimate of the learning loss experienced by pupils in years 3-9 over this unique year.

We know that many Renaissance users have already taken action to assess the impact of this disruption on their pupils. This latest release offers an insight into the national picture and provides a valuable benchmark against which to compare your pupils, with breakdowns by age, socio-economic status and region. So, if you are looking to get an idea of where your cohort sits, where should you begin? This blog aims to provide the practical steps needed to gauge the impact on your pupils and benchmark against the findings of this research.

The power of the Scaled Score

Much of the discussion around these reports has focused on the months of learning loss experienced by pupils, but these estimates are all based on the Renaissance Star Scaled Score. Many Star users will be comfortable with one or more of the scores provided after an assessment, whether that’s the Reading Age, NRSS, PR or SGP, but in many ways the Scaled Score is the unsung hero that ties them all together. From this underlying score, both our criterion-referenced and norm-referenced scores are derived, offering teachers both relative and actual attainment information. This single scale across primary and secondary is also linked to the national curriculum through Renaissance’s learning progression, showing teachers where a pupil is working and what they are ready to learn next (more on that to come!).

Now, this Scaled Score can also be used to compare your Star Reading and Star Maths results with the findings from the latest reports, giving you a vital indication of how pupils have fared following the disruption to their learning since March 2020. Much like the Renaissance KS2 SATs linking study, this research provides a Scaled Score benchmark which you can apply to your Star results. Of course, no one can know the precise impact of the pandemic on children’s learning, and these estimates should not be viewed as definitive, as there is some uncertainty around them. However, they do provide a robust indication at a country-wide level of how pupils’ learning was affected, and we hope they can give some reassurance and provide a new perspective on your assessments.

What does this research tell us?

The graphs below show the mean Scaled Scores in reading and maths in the first half of the autumn term in 2019/20 and 2020/21. These are presented for year groups 3, 6 and 9 in reading, and year groups 3 and 6 in maths, but the intervening years are also available in the Annex tables of Report 2 and at the bottom of this blog. As the Star attainment data was matched with pupil records in the National Pupil Database, you will also see breakdowns by several key characteristics and ethnicities, although sample sizes are smaller for these breakdowns, so do interpret the results with some caution.

You will notice variation in the size of the change from the first bar to the second for each year group, as well as in the starting points from the pre-pandemic baseline. Echoing the pattern seen in national outcomes, pupils taking Star Assessments have historically performed better in reading if they are female and better in maths if they are male, performed worse if they are FSM Ever 6 or have SEND, and EAL pupils who have arrived in the state school system within the last two years are at a disadvantage compared with other EAL pupils. The similarity between these patterns and those of national assessments increases the confidence that the findings from this research on learning loss provide an accurate picture of the impact of time out of the classroom.

As well as these mean Scaled Scores by year group and characteristic, EPI also calculated the average difference in Scaled Score points by sub-group. These average differences provide a rough measure against which you can compare changes in your own results year on year, and they are available in Annex tables A8, A10 and A11, reproduced at the bottom of this blog.

Overall, EPI found that for primary aged pupils in years 3 and 6, the average results in reading were between 10 and 30 Scaled Score points lower in the first half of the autumn term in 2020/21 across all characteristic groups when compared with results in 2019/20. For year 9 pupils in secondary schools, the average results in reading in 2020/21 were broadly the same as in 2019/20.

In mathematics, results in 2020/21 were substantially lower than in 2019/20 across all characteristic groups, apart from EAL recent arrival pupils. On average, Year 3 pupils scored around 35 Scaled Score points lower in 2020/21 than in 2019/20, and Year 6 pupils scored around 20 points lower.
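To make that concrete, take the all-pupil figures from Table A1 at the bottom of this blog: the mean Year 3 reading Scaled Score fell from 304 in autumn 2019/20 to 285 in autumn 2020/21, a drop of 19 points. If your own Year 3 reading average fell by a similar amount between the two autumns, your cohort was broadly in line with the national picture; a noticeably larger or smaller change would suggest your pupils were more or less affected than average.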

How to find the Scaled Score for your pupils

The estimates provided in this research are based on very large sample sizes, which means you may see more variation if you look at an individual pupil or a small group. Looking at these averages across a year group will give you the best comparison, and you can find the average Scaled Score for a year group by using the Star Reading and/or Star Maths Summary Report. Currently, the average Scaled Scores are provided for Autumn 1, so you will want to view the report for this period only by narrowing down the dates in the Reporting Period. Then choose the option to Group By year group. You could also select Yes to Summary Only to save generating all pupil results.
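If you prefer to work outside the reports, the same averages can be produced from an export of your Star results. The sketch below is purely illustrative and is not a Renaissance feature: the file name and the column names ("Year Group", "Scaled Score", "Assessment Date") are placeholders to swap for whatever your own export uses.

```python
# Illustrative sketch only: mean Scaled Score per year group from an
# assumed CSV export of Star results. File and column names are placeholders.
import pandas as pd

results = pd.read_csv("star_reading_results.csv", parse_dates=["Assessment Date"])

# Restrict to the Autumn 1 window so the averages line up with the
# reporting period used in the research (first half of the autumn term).
autumn1 = results[
    (results["Assessment Date"] >= "2020-09-01")
    & (results["Assessment Date"] <= "2020-10-23")
]

# Mean Scaled Score by year group, for comparison with Table A1 below.
print(autumn1.groupby("Year Group")["Scaled Score"].mean().round(0))
```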

As with all Renaissance reports, you can easily filter for a specific demographic using a Reporting Parameter Group. You will first need to ensure that characteristics of interest are assigned to pupils, either by assigning them manually to a single pupil or by adding them to multiple pupils through an import or through Users > Edit multiple pupils. From there you can build the reporting parameter groups required.
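If your export also contains characteristic flags, the same sub-group breakdowns can be reproduced outside the software. Again, this is only a sketch: the "FSM Ever 6" column and its "Yes" value are assumed placeholders, not names your data will necessarily use.

```python
# Illustrative sketch: mean Scaled Score for a single characteristic group,
# mirroring what a Reporting Parameter Group gives you inside the software.
# The column name "FSM Ever 6" and the value "Yes" are assumed placeholders.
import pandas as pd

autumn1 = pd.read_csv("star_reading_autumn1.csv")
fsm_pupils = autumn1[autumn1["FSM Ever 6"] == "Yes"]
print(fsm_pupils.groupby("Year Group")["Scaled Score"].mean().round(0))
```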

If your pupils used Star Assessments in the previous academic year and you would like to benchmark them against the national average from 2019/20 or view the change between these two periods, you can use the Star Growth Report, or change the dates of the Summary Report to the corresponding period in the previous year. Alternatively, you can go to School Years > Work in a Different School Year and select 2019-2020 and then follow the above instructions to pull the report.
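The year-on-year change can also be calculated in the same spirit as the average differences in Tables A8, A10 and A11. The sketch below assumes one export per academic year, using the same placeholder column names as above.

```python
# Illustrative sketch: change in mean Scaled Score between the two autumns,
# per year group. Negative values mean a lower average in 2020/21 than in
# 2019/20, which you can set alongside the national changes reported by EPI.
import pandas as pd

cols = ["Year Group", "Scaled Score"]
autumn_2019 = pd.read_csv("star_reading_autumn1_2019.csv", usecols=cols)
autumn_2020 = pd.read_csv("star_reading_autumn1_2020.csv", usecols=cols)

mean_2019 = autumn_2019.groupby("Year Group")["Scaled Score"].mean()
mean_2020 = autumn_2020.groupby("Year Group")["Scaled Score"].mean()

print((mean_2020 - mean_2019).round(1))
```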

What next?

Now you have a sense of where your pupils are compared to the national picture, you may well be asking how you can best accelerate their learning and hasten the recovery of the learning time that was lost. As mentioned above, Renaissance Star Assessments are built around an empirically validated learning progression which was created in collaboration with the NFER. This not only pinpoints where a pupil is on the national curriculum based on their Scaled Score for reading and maths, but also highlights the Focus Skills™ that are essential to a pupil’s progression.

A personalised report is available via the Star Reading and Star Maths instructional planning reports, for individual pupils as well as groups of pupils, and Renaissance have also released free teacher workbooks of the Focus Skills to give everyone the opportunity to prioritise the most important skills in the curriculum. This is not to say that other skills lack value; rather, these Focus Skills are essential prerequisites that open up the curriculum to pupils and help to accelerate their progression. A full blog explaining this can be found here.

Want to know more?

We have provided responses to some questions below, but if you want to know more about any of the information discussed in this blog, please don’t hesitate to reach out! We will also be holding a panel discussion on this research on Thursday 1st July at 12.30pm. We hope to see you there.

Register

 


FAQ

Why am I not able to calculate the learning loss in months for my pupils?

Unfortunately, it is not possible to replicate the process of finding the learning loss in months for your pupils, as the models used would not be accurate at the smaller scale of a single class or year group from one school.

A breakdown of the methodology used to create the estimates of learning loss in months is provided in the first Interim Report which was released in February. This methodology consisted of taking millions of historic data points from the 2017/18 academic year onwards to plot attainment trends across different year groups and demographics. This prior attainment information was used to create estimates of the expected outcomes for the 2020/21 academic year if there had been no disruption to learning.

This was all done using the Renaissance Star Scaled Score as discussed above, and regression models were created to allow for different rates of progress. An equation was then used to convert the gap between expected and observed outcomes into months of learning loss.
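The precise specification is set out in the Interim Report rather than reproduced here, but as an illustration only, a calculation of this general shape looks something like:

learning loss in months ≈ (expected Scaled Score − observed Scaled Score) ÷ expected Scaled Score growth per month

where the expected score and the expected rate of growth come from the regression models fitted to pre-pandemic data.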

What about the Reading Age and the months of progress seen in that score?

As above, the months of learning loss discussed in these reports are specific to this research and should not be confused with, or compared to, the change in months that might be seen via the Renaissance Star Reading Age, as this is a very different measure.

The Reading Age provided following a Star Reading assessment is a criterion-referenced score representing a bracket of Scaled Score points, and as such it is not the most robust means of measuring pupil progress. Reading Age does not take the pupil’s age into account, and a pupil could grow by several Scaled Score points without changing in terms of Reading Age. If you are looking to measure granular progress, the Scaled Score is the most sensitive score, whilst the norm-referenced scores such as the NRSS and PR will give a stronger indication of whether a pupil is on track based on their age and year group. The SGP will also compare pupils with their national peers, letting you know whether the progress they are making is good not only for their age, but also for their ability. If you would like to know more about Star Assessments, there is information here, or reach out to our team.

What about looking ahead?

Not only can the average Scaled Scores provided in this research be used to benchmark your pupils against the national picture in this unique year, they also afford the opportunity to compare where your year groups are now, ahead of next autumn. If, for example, you are assessing Year 4 this term, you could compare their average Scaled Score with that of Year 5 in the autumn of either 2019/20 or 2020/21 to see if they are on track.
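For instance, Table A1 at the bottom of this blog shows that the mean Year 5 reading Scaled Score in the first half of the autumn term was 472 in 2019/20 and 458 in 2020/21, so a current Year 4 cohort whose average reaches this region by next autumn would be broadly in line with those national figures.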

As Star Assessments are not used by all pupils, is this study representative of the wider pupil population?

Yes. Matching with the National Pupil Database allowed EPI to examine the demographic breakdown of pupils who sit Star Assessments, and they found that, overall, the characteristics of Renaissance pupils corresponded to those of pupils nationally. The proportions of Renaissance pupils who were boys, who were eligible for free school meals, who had English as an additional language, or who had an identified special educational need or disability, were very close to the proportions of pupils nationally with each characteristic. This was true amongst both primary and secondary aged pupils.

Renaissance pupils were slightly more likely to be from white backgrounds and had a range of prior attainment in national curriculum assessments at Key Stage 1 and Key Stage 2. They were slightly less likely than average to be assessed as being below the expected standard at Key Stage 1 and slightly less likely to be either below or above the expected standard at Key Stage 2.

There is more detail provided in Report 2, and such analysis increases the confidence that the patterns of results we see in this data are likely to reflect outcomes for the pupil population as a whole.

Can I see if my region was more affected than others?

Yes. The average loss in Scaled Score points was broken down by region for Autumn 1 and Autumn 2, and this information is provided in Report 2 in figures 3.3, 3.4, 4.5 and 4.6, and in annex tables A8, A10 and A11. Again, smaller sample sizes do play a role here, so please do interpret with caution.


Data Tables
Table A1: Mean scaled scores in reading in the first half of the autumn term 2019/20 – 2020/21 for all year groups by characteristics for figure 2.1
2019/20 2020/21
Year 3 Year 4 Year 5 Year 6 Year 7 Year 8 Year 9 Year 3 Year 4 Year 5 Year 6 Year 7 Year 8 Year 9
All pupils 304 379 472 567 636 720 771 285 363 458 552 636 715 774
Male 297 370 463 558 624 706 757 279 356 451 545 626 705 762
Female 311 388 481 576 647 734 786 290 370 464 559 646 726 786
Non-FSM Ever 6 312 391 488 589 659 746 800 293 376 474 573 659 740 802
FSM Ever 6 270 339 427 514 583 653 706 250 321 411 498 564 645 704
EAL – recent arrival 278 318 418 493 568 619 678 280 311 394 464 674 619 619
EAL – other 293 369 465 562 628 699 742 275 353 450 547 600 694 762
No identified SEND 311 392 488 587 658 741 798 291 377 474 572 658 736 803
Identified SEND 226 282 363 445 506 581 622 215 263 355 434 507 584 623
Any other ethnic group 279 349 440 542 611 703 760 264 339 430 526 532 692 763
Asian and British Asian 299 379 473 575 640 713 764 285 363 459 558 624 710 767
Black and Black British 309 379 463 569 633 705 769 292 365 459 553 629 711 772
Chinese 346 416 532 632 739 808 831 335 422 530 634 575 793 893
Mixed 315 392 484 578 653 731 787 300 376 477 567 625 726 793
White 304 378 471 565 634 721 771 283 362 456 550 639 716 773

 

Table A2: Mean scaled scores in mathematics in the first half of the autumn term 2019/20 – 2020/21 for all year groups by characteristics for figure 2.2
2019/20 2020/21
Year 3 Year 4 Year 5 Year 6 Year 7 Year 3 Year 4 Year 5 Year 6 Year 7
All pupils 485 552 626 688 720 451 530 602 667 704
Male 486 557 630 693 705 456 536 609 671 708
Female 483 547 623 684 734 446 524 596 662 701
Non-FSM Ever 6 491 558 635 699 736 457 538 611 679 714
FSM Ever 6 451 528 597 660 673 426 499 572 634 678
EAL – recent arrival 418 524 604 629 601 421 504 551 665 562
EAL – other 483 567 643 702 725 455 533 615 683 677
No identified SEND 491 561 637 701 741 456 539 611 679 727
Identified SEND 420 481 547 607 608 398 462 538 594 609
Any other ethnic group 495 554 651 698 643 439 528 585 680 642
Asian and British Asian 478 580 649 713 699 470 535 617 681 678
Black and Black British 474 543 625 698 797 450 523 600 661 632
Chinese 539 614 681 755 N/A 511 593 713 775 742
Mixed 513 548 626 683 740 458 550 614 674 733
White 483 548 622 684 719 449 528 600 664 717

 

Table A8: Estimated learning loss, in scaled score points, and pupil numbers by sub-group for figures 3.3 and 3.4
Reading (Primary) | Reading (Secondary) | Mathematics (Primary)
Scaled score learning loss   Count | Scaled score learning loss   Count | Scaled score learning loss   Count
Male -15.8 112,979 -14.1 53,820 -23.8 7,162
Female -17.8 115,376 -12.1 53,454 -26.0 7,389
non-EVER 6 FSM -16.4 170,410 -12.3 78,914 -24.2 11,136
EVER 6 FSM -18.3 57,945 -15.1 28,360 -27.3 3,415
Any other ethnic group -14.6 3,716 -22.6 1,764 -26.6* 234
Asian -17.3 23,484 -14.9 9,836 -28.2 1,497
Black -16.5 9,792 -11.7 4,909 -23.0 603
Chinese -2.3 873 -6.5* 331 N/A N/A
Mixed -15.5 12,500 -9.2 6,018 -23.7 720
White -16.9 176,138 -12.8 82,250 -24.6 11,347
EAL – other -17.4 41,594 -21.0 14,593 -24.2 2,590
non-SEN -17.1 198,987 -13.1 92,101 -25.4 12,780
SEN -14.8 29,368 -13.0 15,173 -21.7 1,771
East Midlands -14.8 18,043 -17.1 11,971 -18.6 983
East of England -17.9 31,871 -11.1 12,797 -27.4 1,330
London -15.0 20,645 -14.4 10,385 -25.1 2,053
North East -21.7 16,704 -15.6 4,978 -31.8* 327
North West -18.3 26,125 -9.4 13,475 -23.4 1,340
South East -16.6 40,145 -9.4 21,526 -24.3 4,235
South West -12.7 33,839 -14.2 11,779 -13.2 1,913
West Midlands -15.1 25,700 -13.0 9,837 -27.4 1,162
Yorkshire and the Humber -24.8 15,283 -19.2 10,526 -45.3 1,208

Note: Asterisks indicate sub-groups where the sample is less than 500 pupils and as a result some caution should be taken with interpreting the estimate.

Table A10: Estimated learning loss, in scaled score points, and pupil numbers for reading by sub-group for figure 4.5
Primary
Scaled score learning loss
Autumn 1   Autumn 2   Count
Male -15.1 -11.2 55,504
Female -17.8 -10.7 57,467
non-EVER 6 FSM -16.4 -10.4 84,938
EVER 6 FSM -16.8 -12.6 28,033
Any other ethnic group -9.2 -8.9 1,737
Asian -15.3 -12.1 11,502
Black -14.7 -13.4 4,448
Chinese -3.6* -13.5* 454
Mixed -15.6 -12.5 6,032
White -16.9 -10.6 87,948
EAL – other -16.1 -12.1 20,328
non-SEN -17.5 -11.7 99,380
SEN -9.1 -5.7 13,591
East Midlands -12.8 -12.8 8,638
East of England -17.3 -8.5 14,390
London -12.2 -6.1 8,177
North East -20.6 -17.8 9,060
North West -18.1 -17.0 12,788
South East -16.4 -10.4 21,570
South West -13.5 -6.8 16,778
West Midlands -15.0 -8.0 12,740
Yorkshire and the Humber -24.3 -15.1 8,830

Note: Asterisks indicate sub-groups where the sample is less than 500 pupils and as a result some caution should be taken with interpreting the estimate.

Table A11: Estimated learning loss, in scaled score points, and pupil numbers for mathematics by sub-group for figure 4.6
Primary
Scaled score learning loss
Autumn 1   Autumn 2   Count
Male -22.4 -15.7 4,294
Female -25.0 -18.7 4,576
non-EVER 6 FSM -23.0 -16.0 6,748
EVER 6 FSM -26.2 -21.2 2,122
Any other ethnic group -21.6* -14.0* 139
Asian -31.1 -21.1 902
Black -17.1* -14.4* 362
Mixed -20.7* -11.9* 422
White -23.4 -17.1 6,951
EAL – other -25.5 -18.2 1,541
non-SEN -24.4 -18.2 7,879
SEN -18.6 -10.0 991
East Midlands -27.0* -19.6* 480
East of England -29.4 -23.2 708
London -17.0 -8.0 948
North East -31.7* -21.7* 270
North West -22.1 -11.3 944
South East -21.1 -16.9 2,488
South West -12.3 -3.0 1,500
West Midlands -28.2 -25.4 692
Yorkshire and the Humber -48.7 -46.5 840

Note: Asterisks indicate sub-groups where the sample is less than 500 pupils and as a result some caution should be taken with interpreting the estimate.

 
