I am currently thinking about Boyte’s (2006) book Everyday Politics, in which he advances the persuasive idea that politics in America has been professionalised, creating a rift between politicians and think-tank wonks on the one hand and the citizenry on the other, a rift that needs to be redressed by reconnecting citizens with politics. I’m considering this idea in relation to contemporary (professionalised) academia, especially in the humanities, to try to understand what it is academics do: developing what type of knowledge, relating to whose concerns, and so on. The broader aim is to reconnect knowledge work in education with the public and the mediating institutions of the education system. This aim aligns with an observation made by Prof. Mike Apple (click this link to see Mike speak in Manchester on 4th July) that one reason the forces of conservative modernisation – the neolibs and neocons – have been so effective is that they speak to the hopes, fears, concerns and dreams of the lay public (read an earlier blog post about this here). Put simply: if the Right speaks to the public, what are those seeking greater social justice doing?
This is why I’m interested in a recent exchange on the ESRI blog (read here) between Harry Torrance and Paul Driscoll. Prof. Harry Torrance has vast experience of researching education, and from this grounding he wrote a post responding to Ben Goldacre’s call for greater use of RCTs (randomised controlled trials) in educational research (read here) – a call, as Harry pointed out, that ignored the extensive field of educational research that already engages with these very issues.
Paul Driscoll wrote a series of comments responding to Harry’s post, and they continued corresponding by email. (Both have read this attempt to integrate the dialogue.) Paul is an academic molecular biologist with a PhD in the physical sciences, but also soon to be chair of the governors of a local school. He describes himself as an ‘interested amateur’ in educational research, with a particular concern for how his school would measure the impact of the pupil premium,
“I was thinking how does one really measure the ‘impact’ of pupil premium funding (as we are required to do) – we don’t have the properly matched control condition where that funding was not applied. The same will be true in the future since (as of this week) OFSTED will require us to measure the ‘impact’ of strategies applied to assist the ‘more able students’. Sure, we can compare year groups before and after, but I fundamentally don’t like that because it seems to me that there are too many confounding variables – different students, different teaching staff etc…”
Harry’s comment on this is that it is unreasonable for OFSTED to expect individual schools to evaluate effectiveness in this way, and that there seems to be some confusion between monitoring expenditure for accountability purposes and undertaking a proper evaluation.
As part of the email exchange, Harry suggested a possible quasi-experimental evaluation design for the effects of the pupil premium (PP),
“There is the question of whether or not we have any data from before PP was introduced, but we may be able to compare with similar schools without such funding elsewhere – a form of quasi-experimental design. Properly funded evaluations should be able to collect both qualitative and quantitative data from successive cohorts of schools and students as an innovation is introduced to different cohorts over time. Thus for example T1 might include a sample of schools receiving the intervention and a sample not. T2 would include a sample of those now with one term’s/year’s experience (T1 + 1), a sample where it’s new (T2), and those not yet included; etc. etc. over time. Proper matching may remain an issue but such panel studies are likely to be as good as it gets when real schools are trying to get on with their ‘day job’ of teaching pupils rather than helping with research studies. Overall, in such evaluations, we would try to understand not just whether something ‘works’, but how it works, why it works, how it is sustained (or not) over time. This is what I meant in the blog when talking about longitudinal qualitative data collection running alongside a trial.”
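To make the T1/T2 notation concrete, here is a minimal sketch of the staggered rollout Harry describes. The cohort labels, time points and three-cohort structure are my illustration, not part of the exchange; the point is simply that at every time point there are schools at different stages of exposure to the funding, including some not yet receiving it:

```python
# Illustrative sketch of the staggered-rollout panel design described above.
# Three hypothetical cohorts of schools: A starts PP funding at T1, B at T2,
# and C (not yet funded) serves as the comparison sample throughout.
cohorts = {"A": 1, "B": 2, "C": None}  # time point at which PP funding begins

for t in (1, 2, 3):
    statuses = []
    for name, start in cohorts.items():
        if start is None or t < start:
            status = "no PP yet"            # comparison sample at time t
        elif t == start:
            status = "PP new"               # intervention just introduced
        else:
            status = f"PP +{t - start} yr"  # one or more years of exposure
        statuses.append(f"{name}: {status}")
    print(f"T{t}  " + " | ".join(statuses))
```

Run over successive years, this yields comparisons both between cohorts at a single time point and within cohorts over time, which is why Harry suggests such panel studies are likely to be ‘as good as it gets’ when proper matching is not available.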
Both Paul and Harry found common ground on the difficulties of designing such evaluations, with Harry noting that, in the busy context of school-based decision-making, some evidence is better than no evidence, even if it’s not generated by an RCT. Similarly, Paul wrote,
“And frankly the more I think about it the more I don’t care, since the pupil premium is a ‘political construct’ designed to make certain politicians look good in a system of declining resources. We will just try to do the best for all of our students regardless of background.”
It is in regard to these broader issues and the admirable intention to do the ‘best for all of our students regardless of background’ that educational research performs a crucial and integral function. Harry wrote,
“it’s difficult even to try to agree on what these terms [pupil achievement and motivation] mean to different social actors and how we would know them when we see them, if we see them. How is morale or motivation manifested? Some might argue that this is irrelevant noise and that as long as test scores go up we know things are ‘working’. But then you get into debates about what we are trying to achieve through education – not just test scores surely – the grade inflation of the last 20 years must give us pause for concern there. The quality of the educational encounter is important, as is the pursuit of a wide range of outcomes. The evidence we have from educational research over many years tells us that it is the quality of teacher-pupil interaction, the vitality of what goes on in the classroom, that is important when trying to improve educational standards.
“So, just because it’s difficult doesn’t mean we shouldn’t try. But it does mean we have to be modest in our claims. The strength of qualitative approaches is that direct, face-to-face questioning and observation of teachers, pupils, parents etc. can give us some purchase on what the above terms mean and how they are operationalized. Thus we can build up a picture of, for example, not just how many teacher-parent discussions have taken place over a period of time (which may or may not have increased with the introduction of PP funding) but also what the content and quality of the discussion is like. Similarly, there are well-established ways of collecting and analysing such data; it doesn’t have to be RCTs vs. anecdote. The library shelves groan under the weight of social science research methods texts.”
Paul, ever the scientist, is happy to state what he does and does not know,
“And please let me declare again my general ignorance of research in education. I am only that interested amateur mentioned at the start. I have no axe to grind. But I would welcome more, and importantly accessible, debate of these issues [the evidence base of public services and policy], which – it seems to me – should fundamentally underpin so much of our political discourse on public policy in all spheres.”
It is the ‘general ignorance’ of the ‘interested amateur’ that I’m interested in. The idea of reconnecting educational knowledge work with the public remains to be developed, and blogs such as this are one possible vehicle. What an ‘accessible debate of these issues’ would look like is an interesting question. Maybe ESRI could develop a pamphlet titled ‘the merits of qualitative research’ in five easy points – although I’m guessing not everyone would be happy with that.
I’d like to thank Paul Driscoll for contributing to this post. The idea was explicitly not to show that educational researchers (Harry) know more about educational research than people with other jobs or interests (Paul), but rather to ask what we as educational researchers need to do to connect thinking about education with the many and varied people governing schools, choosing schools for their children, or voting in elections.
James Duggan