Why slow and steady wins in the race for better outcomes
There is always so much pressure on schools to improve outcomes, to implement…to have ‘activity’, writes guest blogger James Siddle. But taking time to ensure that activity is truly evidence-informed means everyone wins in the long run.
For four years I had been reading about reading: reading for pleasure, reading pedagogies, reading comprehension – and each step had fed into our annual improvement plan. The school-wide results in Key Stage tests were consistently above the national average, especially for disadvantaged children. However, one consequence of digging deeper into a subject is that you always get more questions than answers – and some of the most important answers about improving reading instruction were still not forthcoming.
Professor Steve Higgins once told me that interventions can be like a balloon: you squeeze them and they change shape, but then you let them go again… I knew some interventions, such as Abracadabra, had longitudinal evidence to suggest a lasting impact (if implemented successfully), but there was still a group of children in lower Key Stage 2 for whom phonics had been a challenge and whose reading fluency was below that of their peers. From this sticky point, prosody and comprehension had failed to flourish. I burrowed into the data and evidence and, it seemed, I had missed the fluency trick. How could I resolve this?
It was no good knowing the evidence if I didn’t know the children well enough. Data, for me, is far from just figures on a spreadsheet. It’s a starting point for teasing out the nuance of understanding. The broad trends led to a more detailed question-level analysis and a survey of the children – their answers were, as always, illuminating. Those children who were flourishing were developing certain approaches through repeated exposure to rich texts. They were predicting and summarising; they were re-reading tricky passages and assimilating new vocabulary; they were making links with their prior knowledge and their learning across the curriculum. For others, this joined-up process was not happening. They did not re-read – it was too laborious a process; they did not linger on new vocabulary; they often did not infer meaning and were not enjoying reading.
Learning to walk
I then delved into the evidence – this is the part I have learnt (the hard way) not to rush. There is always so much pressure to change outcomes, to implement…to have ‘activity’; so, I lingered over the evidence for several weeks. In the EEF guidance report Improving Literacy in Key Stage 2, amongst the wealth of evidence and bullet-pointed tips, was where we found a potential answer:
Fluent reading supports comprehension because pupils’ cognitive resources are freed from focusing on word recognition and can be redirected towards comprehending the text.
This can be developed through:
guided oral reading instruction—teachers model fluent reading of a text, then pupils read the same text aloud with appropriate feedback; and
repeated reading—pupils reread a short and meaningful passage a set number of times or until they reach a suitable level of fluency.
I looked into the evidence and counterarguments about the impact and implementation of repeated reading (does repeated reading, as Allington (2016) suggests, lead to gains in fluency and comprehension primarily because it increases reading volume, and does this in itself account for the gains?).
A theory of change
There were different ways of implementing these seemingly straightforward bullet points: we could work on whole-class practice (and teachers already used a form of ‘echo reading’) or we could use recordings for the children to listen to, for example.
The evidence review gave us shortcuts to some of these answers. The time I was taking with my leadership team could be interpreted as dragging our feet, but I was sure that, in the long term, it would get us the answers we needed.
We were developing our theoretical model, but as Thompson and Wiliam (2007) note:
No matter how good the intervention’s theory of action, no matter how well-designed its components, the design and implementation effort will be wasted if it doesn’t actually improve teachers’ practices—in all the diverse contexts in which they work, and with a high level of quality.
We developed a logic model and our primary research question:
What impact does the implementation of repeated reading, delivered by teaching assistants four times a week for 30 minutes over a term, have on the reading fluency and comprehension of identified disadvantaged children in Years 3 and 4?
‘Know thy Impact’
But how would we measure our impact? It would be easy to assume that an evidence-based approach might well have a positive impact, but would this be the case in our context and with how we planned to deliver it? This has been one of the areas in which I have learnt most during my time in school leadership; as Hilliard noted, effective implementation is the single most important factor in the success of instruction (Mabie 2000).
I asked the counterfactual question and conducted a ‘pre-mortem’ by examining all the reasons it might go wrong. I considered basic compliance measures – the dosage, for example (were the children getting the intervention four times a week for 30 minutes?). I considered how we were measuring the outcome – was the data really going to tell us what we wanted to know, or was a broad comprehension test too blunt an instrument? Were other factors influencing the outcomes? This last point was especially pertinent when we considered a control group – we were a small school with a small sample.
Work I’d done previously with the Education Endowment Foundation and the RAND Corporation had also opened my eyes to the fact that the training is part of the intervention and needs measuring too. We ran a pilot phase and then an ‘after action review’ before we upscaled the intervention.
As educators we often fail to account for our own education – this is no surprise in many ways with all the demands of the role. Throughout the various stages of improving outcomes, especially for our most disadvantaged pupils, I have learnt to be guided by evidence in concert with knowledge of the context and professional practice.
James Siddle is Headteacher of St Margaret's Church of England Primary School in Lincolnshire and a Director of KYRA Research School.