Critical realism in education research - can storyboarding help?
Cloud created by: The Career-learning Café, 12 April 2012
The first prime-ministerial call for an education debate was Jim Callaghan’s - in 1976. The debate goes on. Ministers assert their dominance, apparatchiks cast themselves as guardians, academics say it’s more complicated, professionals worry it won’t work, and the media try to hold people in a state of doubt and fear. Any claim is countered with another correlation, opposed by another anecdote, or outflanked by another outlier. And global commerce waits in the wings.
A good many of the underlying questions are variations on...
> what’s going on in education?
> what can be done about it?
And a good many people have an answer to both. But they don’t agree on what they know. Jim Callaghan’s great debate is sinking into a quagmire.
So the questions keep cropping up. As they do in three-scene storyboarding. It’s a narrative process for finding meaning and seeking purpose in people’s lives. Finding meaning probes what’s going on. Seeking purpose figures out what to do about it. The underlying questions are...
> what’s going on in your life?
> what can you do about it?
The answers that storyboarding gets won’t resolve the great debate. But they help to explain why nobody else can.
There’s nothing new about questions rooted in ‘what’s going on?’. Erving Goffman posed it, and Marvin Gaye got a song out of it. These are questions close to the heart of the human condition. Our ability to work with them is our fingerhold on survival - in the savannah, on the street and doing research.
So could working with street-level questions qualify storyboarding as research? That would be a sillier question if ‘proper’ research did not, itself, have methodological issues. They include concerns for both how researchers frame enquiry, and how respondents shape what they say. People can be influenced by the expectations set up by dominant interests. They may want to please. Or to look good. They may even string professionals along. And it’s not certain that professionals will notice.
All the apparatus of research - framing, gathering, finding, questioning, contextualising, reporting and recommending - must itself be open to enquiry. We need to look deeper than what is easily or conveniently found. Nowhere more so than in education.
This is where Sue Clegg (2005) joins the debate. She points to failures in educational research and claims that ‘critical realism’ can help fix things. I’m taking ‘critical’ to mean that we really need to know what’s going on. And ‘realism’ to mean that findings should be recognisable to the people whose lives they are supposed to improve.
This would make evidence less than the sole determinant of findings. It becomes just one of the tools for probing what might be going on. The claim inserts into the methodology an informed understanding of what that could be. For that informed understanding Sue Clegg appeals to education professionals. But Ann Oakley (2000) also wants to include the understanding of people whose lives will be affected. Where education research has no such connectedness, Sue Clegg and Ann Oakley insert experience into the methodological mix.
This is not to propose a hypothesis to be retained or rejected; it is to modify how research is conducted. Such research can know that absence of evidence is not evidence of absence, but it can also wonder whether that absence is a surprise worth scrutinising. It would look again at the methodology - perhaps to find knee-jerk, contrived, detached or convenient responses.
The case is that, before anybody gets to making any recommendations, somebody needs to visualise the lives they are to serve. That means knowing something about the locations in which those lives are situated, how things are in those settings, and what each of those populations is in any position to recognise as real.
But locations, settings and populations are plurals. There can be no same-for-everybody answer leading to a general entitlement to a structure or programme. Indeed, the findings of a single study can suggest contradictory recommendations. What is recognisable in some lives is rejected in others. What serves some people's wants denies other people's needs.
Policy, the media and commerce look for clear and immediate answers. Critical realism says they can’t have them.
Asking ‘what’s going on?’ and ‘what can we do about it?’ can find evidence of impact - the difference made by those ‘goings-on’ and that ‘do-about-it’. Impact research is critical for climate change, energy generation, cloning, stem-cell technology and GM foods. Audiences in government, commerce and society need answers. And they need them quantified, so that what is measured-or-counted justifies action.
The impact of education is no less critical. Indeed, it is essential to all those critical issues. But the impact of curriculum is more locally variable - from street to street, and from one end of the village to the other. Curriculum effectiveness is situated - a post-coded phenomenon. And the ‘where?’ of location, the ‘how?’ of setting and the ‘who?’ of position are better understood through narratives than by numbers.
It is why Sue Clegg and Ann Oakley look to ethnography. It is a situated narrative form. Paul Willis (1977) and Howard Williamson (2004) set out narrative sequences located in background, shaped by attachments and moved on by encounter. Phil Hodkinson (1996) shows career to be punctuated by unpredictably managed turning points. It is possible to count frequencies in such narratives; but they mostly rely on what people show-and-tell.
A critically scrutinised methodology allows that a possible finishing point need not correspond with a documented starting point. Findings need not predict futures. It is necessary to look outside the findings in order to know what kinds of turning point would make anything else possible. But where in the methodology should that search come? Even in ethnography it figures late, and sometimes not at all. It comes late and pessimistic from Paul Willis. Howard Williamson allows findings to speak for themselves. Phil Hodkinson pointedly refrains from linking findings to recommendations.
Ethnographies give voice to students' attributions; they are situated, and they vary from locality to locality. They uncover complexity, ambiguity and inconsistency. That may be why ethnographers hesitate to apply findings. And it is not what audiences for research want to hear. But professional educators, in conversation with their students, can do better. Ethnographies express...
> prevalent positions on beliefs, values and expectations
> minority positions on the same
> experiences that shape those positions
> the sense that people make of those experiences
> how and why that experience connects people to a programme
> how and why it does not
Where students are prepared to share this they are inviting an educationist into a conversation. A storyboard shows what's going on in images, and tells it in dialogue. It is a tool for helping students.
But, in acceptable conditions, it might also provide data for wider use. There may be no measurements. But, put together, a locally-collated collection of storyboards can support comparisons, find similarities and differences, and count frequencies. And all of those data can be critically scrutinised.
Storyboarding does not compare to ethnography in the depth and breadth of its data. But neither does it deal in what can be easily and conveniently found. And there are enough parallels with ethnography for it to serve as a cost-effective research tool.
For an educator that research would be programme evaluation. It stands for an unpretentious professionalism; one which understands who it is working with, and how well that working is working. It is connectedness between what educators do and what students learn. And there is no audience for education research who can afford to ignore this - even if that entails taking an interest in the well-being of other people’s families.
Speaking of interests, and of acceptable conditions, none of this can be done without permission and without anonymisation. It is an ethical issue - one of many. They concern whose interests are being represented in the research. Ethics lifts professionals above arbitrary expectations.
The debate is peppered with such phrases as ‘transparency’, ‘impartiality’ and ‘independence’. And those words are needed to scrutinise a good many interests in education - some of them quite arbitrary. All of the data on comprehensives, free schools, academies, curriculum and vocationalism are differently constructed by differently interested groups.
The advice on tracking such interests given to Watergate investigators Woodward and Bernstein is said to have been...
‘follow the money’
On that basis it is not difficult to document cases where findings are shaped to fit ready-made conclusions. Sponsorship is a clue. A sponsor's overall research agenda is another. Research may resort to findings that depend on large-sample probabilities to claim significance. The surreptitious use of median percentages of tiny percentages camouflages minuscule results. Doubtful estimates of impact distract attention from success criteria. Client-compliant consultants re-shape recommendations to suit dominant interests. Even professional interests may feature, when research concludes that what people need is what the profession provides.
But some bias is inadvertent. Findings that are framed entirely by labour-economics cannot be impartial: people act for other-than-economic reasons. And diagnostic methods designed only by occupational psychologists cannot be independent: matching procedures do not work where socially-situated attachments are significant. This is not an intention to mislead. But it is to conduct research inside critically un-scrutinised boundaries (Max Bazerman and Ann Tenbrunsel, 2011).
The base-line ethical principle is that the enquiry is a response to user voice. Voice is futile if nobody listens. It is a critical consideration in situated education - which starts from where people are, but does not finish there. Among the ethical principles arising from this...
1. before parading findings we need to know locations, settings, positions
2. findings must be related to the interest of the people whose lives they are supposed to improve
3. connectedness must be understood in terms of what might be going on and what might come out of it
4. all of this needs to be declared to the people who are expected to act on it.
Teachers cannot be expected to agree with everything that their students believe, value and expect. If the words ‘transparency’, ‘impartiality’ and ‘independence’ have any meaning, they express mutual connectedness rather than reciprocated compliance. Attending to voice is better than getting down with the kids.
And if there is a universal entitlement to education it is not to be found in any particular structure or programme - it is the right to be heard.
Max Bazerman and Ann Tenbrunsel (2011). Blind Spots - Why We Fail to Do What’s Right. Princeton: Princeton University Press
Sue Clegg (2005). 'Evidence-based practice in educational research - a critical-realist critique of systematic review'. British Journal of Sociology of Education, 26 (3)
Phil Hodkinson, Andrew Sparkes and Heather Hodkinson (1996). Triumphs and Tears – Young People, Markets and the Transition from School to Work. London: David Fulton
Ann Oakley (2000). Experiments in Knowing: Gender and Method in the Social Sciences. Cambridge: Polity Press
Howard Williamson (2004). The Milltown Boys Revisited. Oxford: Berg
Paul Willis (1977). Learning to Labour - How Working Class Kids Get Working Class Jobs. Farnborough: Saxon House