RESEARCH
'Teaching is a craft where you can draw on research for your methods'

In the fourth in our series discussing the question, 'Teaching: art, craft or science?', Dr Brendan Bentley and Dr Gregory Yates argue that teachers develop their own skills but can utilise tried and tested principles.

inTUITION • ISSUE 34 • WINTER 2018

Teaching is a craft. Every practitioner develops a repertoire of skills reflecting individual practices, mannerisms, styles, quirks and the stamp of their personality. Nevertheless, teachers can draw upon information garnered through traditional scientific means. Educational research has provided a wealth of information about how teachers have negotiated the inherent problems of their work, and it has identified coherent relationships between teachers' instructional actions and student outcomes. More directly, the body of data from the field of teacher effectiveness can tell us what form and direction such relationships are likely to take. However, for several sound reasons, it is vital not to overplay such knowledge.

Teacher effectiveness research received a major boost in the period known as the 'Sputnik era'. Over two decades, from the 1960s to the 1980s, the Cold War was associated with considerable expenditure on educational programmes, curriculum development and scientific training (Bracey, 2007). In the United States, the Office of Education was able to direct millions of taxpayer dollars toward classroom-based research. This period saw the emergence of carefully controlled research designs using low-inference observation schemes to monitor what was taking place in classrooms; they aimed to identify correlations between teachers' instructional acts and student achievement. Hence this period became noted for the research model known as the process-product design. It entailed employing trained observers to map teachers' instructional and managerial actions over time (for example, six months). Massive amounts of data were accumulated.

This mammoth effort, which lasted into the early 1980s, was reasonably successful in identifying effective instructional practices (rather than effective teachers as such). It was possible to track statistical relationships between two sets of variables: what teachers did, and how their students scored on achievement tests. Although there are scores of resources citing and drawing on the process-product findings, significant statements are found in the third Handbook of Research on Teaching, by Brophy and Good (1986) and Rosenshine and Stevens (1986). The findings are profound, but they still require careful interpretation. For instance, it was found that teachers who wait a few seconds after a student responds to a question (allowing the student time to elaborate) tend to have students who score well on tests. But caveats need to be acknowledged. Many superb teachers do not display long wait times, and simply extending your wait-time habits will not make you a great teacher.

Efforts to translate such statistical-level findings into tools such as checklists, or other casual teacher-observation instruments, were doomed to failure. The data told us that a template for designing an effective teacher was a myth. The very notion of entering a teacher's classroom and conducting a few minutes' observation (with high-inference tools such as eyeballing) was silly and irresponsible if it was to serve as professional assessment. This is what the data told us at the time, an aspect well recognised by all researchers in the field by the mid-1980s. Indeed, one of the early process-product researchers was David Berliner, who wrote a review paper entitled 'The near impossibility of testing for teacher quality' (Berliner, 2005).

The notion that great teachers cannot be held to a template was then reinforced by the next generation of teacher effectiveness research: the study of the most highly experienced teachers, our experts. Studies into expert teachers mirrored the major findings about expertise in other areas (Berliner, 2004; Hattie & Yates, 2013). The very same set of traits was found. However, one curious finding from this body of research is that it was not possible to use classic statistical theory (with its assumptions of sampling and so on) when analysing expertise. The trouble is that experts never constitute a representative sample from some

Dr Brendan Bentley is director of partnerships and engagement and director of the Master of Teaching programme, School of Education, University of Adelaide. Dr Gregory Yates is an adjunct senior lecturer in the School of Education at the University of South Australia, Magill Campus.

REFERENCES
• Berliner, D. C. (2004). Describing the behavior and documenting the accomplishments of expert teachers. Bulletin of Science, Technology and Society, 24(3), 200-212.
• Berliner, D. C. (2005). The near impossibility of testing for teacher quality. Journal of Teacher Education, 56(3), 205-213.
• Bracey, G. W. (2007). The Sputnik effect: Why it endures, 50 years later. Education Week. Retrieved from goo.gl/TSHj22
• Brophy, J. & Good, T. L. (1986). Teacher behaviour and student achievement. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 328-375). New York: Macmillan.
• Hattie, J. & Yates, G. C. R. (2013). Visible learning and the science of how we learn. London: Routledge.