A ‘clinical trials’ approach to research: seeing what works best for students

Following in the footsteps of doctors and surgeons, hundreds of teachers have been seeing how we can better understand the biology of learning to improve educational approaches, says Dr Richard Churches.

There is a democratic deficit in education research. It is often dominated by people who no longer practise, or never practised, as teachers. Contrast this with our cousins in medicine and healthcare, where serving doctors conduct much of the clinical research – helping to take treatments from ‘laboratory bench to bedside’. At the Education Development Trust, a global education charity, we have been exploring how to change this. As a result, hundreds of teachers have conducted ‘clinical’ trials paralleling those by doctors and surgeons. From working with Global Teacher Prize nominees to collaborations with neuroscientists and a Department for Education project exploring workload reduction, we have looked at how teachers can design randomised controlled trials (RCTs), combining results in big-data analyses to help understand what works, with which learners and in what context.

One project, funded by the Wellcome Trust, explored a key challenge facing neuroscience and education – how to translate evidence from the laboratory into actual practice. From the mid-19th century, similar challenges faced the medical profession as it moved to become a ‘natural science’ grounded in biology. Today, clinicians use biology in a similar fashion to the way that architects use physics. One day, something similar might be possible in education, with the biology of learning forming the basis for an ‘educational clinical reasoning’.

Several commentators have pointed to the difficulty of translating evidence from neuroscience. Taking a moment to reflect on the different levels of research that neuroscientists explore illustrates the difficulty immediately (Figure 1). For what we know about the biology of learning to make sense in schools and colleges, education interventions need to be developed from these ideas and then tested carefully in real classrooms.

Figure 1. Different levels explored by neuroscientists

- Behaviour
- Disease and disorder
- Brain circuits
- Neurons and their connections
- Genes and molecules

Of course, there are many things which can be instantly helpful, such as an understanding of memory and attention, and what we know about teenage brains. That said, as things stand, not enough translational research has taken place.

Thirty-one English schools were involved in the Wellcome Trust project. They received an RCT design day and our books, Teacher-Led Research and Neuroscience for Teachers. Teachers were trained to analyse their results with Excel spreadsheets we built, which calculate research statistics easily, and they wrote up their findings as a conference poster. Most importantly, we combined the teachers’ results into a ‘meta-analysis’ in which the different effects across the studies could be compared. This was done using a ‘forest plot’ (Figure 2). As anticipated, repetitions of a similar approach had different effects in different contexts with different student groups, although the results were positive overall. Bear in mind that sample size matters, as does trial consistency.
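Pooling effect sizes across many small teacher-led trials is typically done by weighting each study by its precision, so that larger trials count for more – which is also why the dots in a forest plot differ in size. A minimal sketch of fixed-effect, inverse-variance pooling of correlation-based effect sizes on the Fisher z scale (the function name and the choice of a simple fixed-effect model are illustrative assumptions, not the project’s actual analysis):

```python
import math

def fixed_effect_meta(results):
    """Pool per-study effect sizes r, given as (r, sample_size) pairs,
    using inverse-variance weighting on the Fisher z scale."""
    zs = [math.atanh(r) for r, n in results]   # Fisher z-transform of each r
    ws = [n - 3 for r, n in results]           # weight = inverse variance of z, i.e. n - 3
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)                    # pooled effect, back on the r scale

# Example: three hypothetical classroom trials of a similar intervention
pooled = fixed_effect_meta([(0.20, 40), (0.35, 60), (0.10, 25)])
```

The pooled value always lies between the smallest and largest individual effects, pulled towards the studies with the most pupils.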

HOW TO READ A FOREST PLOT

Each dot represents the ‘effect size’. Error bars (either side) represent 95% ‘confidence intervals’ – estimating the range of results that might be expected in 95 out of 100 replications (repetitions of the study). The relative size of the dot reflects the contribution of the individual finding to the combined analysis. Positive effects, right of the central vertical line (> 0.00), show that the treatment improved learner outcomes compared to the control. Negative effects, left of the central vertical line (< 0.00), show that the control group performed better. The effect size used in the analysis is called r. Some of you may be more familiar with Cohen’s d (used by John Hattie), included on the right. Finally, you can see an indication [*] of the probability that the result might be misleading (the ‘p-value’). For example, p < 0.05 means a smaller than 5 in 100 probability; p < 0.001 a less than 1 in 1,000 probability.




[Figure axis label: applicability to education]