A number of researchers in the International Centre use a realist approach to evaluation. Julie Harris and Lucie Shuker outline some of the key aspects of this approach for those who are new to it.
This week staff from the International Centre presented three papers at the 2nd International Conference on Realist Evaluation and Synthesis. Between us we now have three completed studies that use the approach: the Safe Accommodation evaluation, a study of co-determined outcomes with young people leaving care, and The Alexi Project. If you’re new to the term ‘realist evaluation’ it might sound a bit alien. So here are some pass notes that will help you in that next conversation with a co-worker about evaluative methods…
1) We ask questions about why and how things change
Realist evaluation is not a methodology; it’s more like a set of beliefs about evaluation and the types of question we should be asking. Lots of evaluation is directed toward answering the question ‘Does this project or intervention work?’ While this is important, realist evaluators also want to know what caused the positive (or negative) change. In other words, ‘What is it, specifically, about this intervention that has this effect?’ Without answering this question, evaluators could gather data on a young person’s self-esteem ‘before’ and ‘after’ a mentoring programme, for example, and then say that any improvements were caused by the mentoring. But of course, the improvement in self-esteem could actually be because the young person joined a football team at the same time, had been consistently praised for their effort on the pitch, and was feeling increasingly good about themselves as a result.
2) It all starts and ends with theories
One of the ways we can get around the problem of trying to work out if it really was the mentoring programme that improved self-esteem is by thinking long and hard about how the mentoring is meant to create positive change in a young person’s life. All interventions have some kind of ‘programme theory’ – some ideas about how they are meant to cause change. But we’re not always brilliant at articulating these. People are increasingly using ‘theory of change’ or ‘logic model’ approaches, which are tools that help us communicate these programme theories. Realist evaluation goes one step further. It develops theories at the start by reviewing all the relevant literature about a programme design and seeking ‘practitioner wisdom’ about how things are expected to work. In this example, perhaps the mentor acts as a role model from whom the young person picks up certain aspirations. It then tests those theories through collecting and analysing data, and refines them as a result of what has been learned. It might be, in this scenario, that the data suggest it’s actually the voluntary nature of the programme, rather than the mentor being a role model, that makes the difference. It’s the fact that a successful adult is choosing to spend time with the young person that makes them start to believe they are worth spending time with – which has a positive impact on their self-esteem as a result.
3) Interventions don’t ‘work’ – people respond to them in different ways
This brings us to one of the most helpful contributions of realist evaluation – its focus on how people respond to services or projects. Have you ever been in an organisation that tried to introduce a new way of working? Maybe it was ‘the latest thing’, or claimed to be evidence based. Either way, everyone was now expected to learn this new approach or use this new tool. How did people respond? How did you respond? Some people may be enthusiastic adopters because they want to progress professionally in the organisation. Others may feel anxious about the amount of time it will take to learn and will therefore challenge its implementation. Good theories need to articulate a) what new resource is being introduced and b) how it’s meant to influence people’s reasoning, thereby changing their behaviour. Perhaps the latest thing is a new risk assessment tool that incorporates a measure of a young person’s resilience (the new resource). The programme theory may be that the tool will remind practitioners of young people’s strengths, and this will help them to treat the young person as already having many of the tools needed to achieve positive outcomes (new reasoning). Realist evaluations develop and test theories that aim to be pretty specific about how interventions are meant to persuade people to change. They aim to get closer to understanding what’s in the ‘black box’, between what services do and the outcomes they achieve.
4) Context makes or breaks an intervention
Much scientific research tries to maintain laboratory conditions that are stable and predictable, so that if you test a new drug, for example, you can be confident that the results are down to the drug – not a sudden spike in temperature in the lab itself. The social world is a very different beast. Interventions are never introduced into neutral or even stable contexts. As we have said above, realist evaluations are trying to understand how people respond to a new intervention, but also why. There are an enormous number of different things going on at the individual, inter-personal, organisational, societal, economic and political levels that will affect the way people respond to a programme. Some of them are less relevant to us (you’re grumpy about the training course you have to go on because you haven’t had any breakfast today) but others are very relevant. Perhaps if you don’t go on the training course, you’ll have your benefits cut off – and so you feel very resistant to the trainers. They may be lovely and knowledgeable people, but if wider welfare policy is hugely unpopular then it might mean people generally don’t want to be there. So, a fundamental part of the programme theory is taking account of the contexts that might help people change their reasoning in particular ways, or might hinder that process.
5) It’s likely to require a mixed methods approach
As with all research, a guiding principle of realist evaluation is that you should use the tools and methods that will help you answer your question: whether that’s large surveys, ethnography or interviews. Having said that, most realist evaluators use mixed methods, because they are trying to work out a) what has changed as a result of the intervention and b) how/why. The first question often (but not always) uses quantitative measurements that can summarise the outcomes achieved by the programme. And because people respond to programmes in different ways, depending on their different contexts, we expect that the programme won’t have a positive impact for everyone. In other words, we would expect to see some patterns in our outcomes data – those the programme helped, and those it didn’t. Once you’ve seen those patterns you can draw on qualitative data (often interviews) to try to work out why some people didn’t respond well. Perhaps the intervention didn’t seem to work as well for women as it did for men. By integrating your qualitative and quantitative data you can hopefully hypothesise about why that might be, and end up with an improved theory that either recommends adapting the programme, or perhaps targeting it just to men.
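For readers who work with outcomes data, here is a toy sketch (entirely hypothetical – the data, variable names and thresholds are made up for illustration, not drawn from any of our studies) of the kind of pattern-spotting the quantitative strand might involve: before/after scores are grouped by a context variable, here gender, to see whether the programme’s apparent effect differs between subgroups.

```python
from statistics import mean

# Hypothetical outcome records: (gender, self-esteem score before, score after).
# In a real evaluation these would come from validated measures, not toy numbers.
records = [
    ("M", 10, 16), ("M", 12, 17), ("M", 9, 15), ("M", 11, 18),
    ("F", 10, 11), ("F", 13, 12), ("F", 9, 10), ("F", 12, 14),
]

def mean_change(group):
    """Average before-to-after change in score for one subgroup."""
    changes = [after - before for g, before, after in records if g == group]
    return mean(changes)

for group in ("M", "F"):
    print(f"{group}: mean change = {mean_change(group):+.1f}")
```

A diverging pattern like this wouldn’t explain anything on its own; it simply flags where the qualitative strand (interviews with participants in the less-helped subgroup) should probe for the contexts and reasoning behind the difference.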
There are lots of reasons we are drawn to realist evaluation in the International Centre.
- It can give us more rigorous explanations of how and why services bring about positive change in young people’s lives, which helps policy and practice communities learn and develop.
- It can help us to build our knowledge and develop or accumulate learning across different programmes, rather than just treating them as single interventions.
- It recognises that children and young people are actively responding to interventions in their lives, for various reasons, and this can prevent us treating them as passive recipients of interventions.
- It also values the wisdom and experience of practitioners and participants, in understanding how and why change does or doesn’t happen.
Whether you are a producer, participant, commissioner or consumer of evaluations we hope this has provided some food for thought. Do get in touch if you’re interested in finding out more about realist evaluation or the way we are currently using it.
The Alexi Project is a large-scale, longitudinal realist evaluation of 16 Hub and Spoke services across England. For more information visit www.alexiproject.org.uk.