Drawing on Process Tracing and Bayesian Updating to evaluate the impact of complex policies

Commissioners of policy evaluations are often interested in assessing the impact of the interventions they fund, which are frequently financed by taxpayers. Impact evaluation questions can measure the extent of an intervention’s contribution, or attempt to understand how and why the intervention had a certain impact. They can focus on an average, population-wide assessment, or on local effects at various scales and under various contextual conditions.

Over the last 10-15 years there has been a strong push for the use of counterfactual analysis, and in particular Randomised Controlled Trials, to assess the impact of such policies. However, conducting experiments or even quasi-experiments requires a series of conditions that are often not met in the day-to-day reality of policy making. In addition, counterfactual analysis as traditionally applied cannot answer questions about how and why interventions worked or did not, when and under what circumstances they did, and on what grounds they can be expected to work in the future or in other contexts.

In response to this state of affairs, commissioners are increasingly requiring that evaluators formulate and test more flexible “contribution claims” about the role an intervention has played in the achievement of one or more outcomes. They do not necessarily aspire to isolate the intervention from other causal factors and measure its net effect; rather, they seek an in-depth, qualitative understanding of how the intervention has drawn on other factors and been adapted and reshaped into the causal package that appears responsible for the outcome. Various forms of theory-based evaluation and various formulations of “theories of change” are used for this purpose, each with its pros and cons. What current guidance lacks are detailed indications on how to collect empirical data to test, refine, reject or confirm hypotheses about causal mechanisms, ultimately supporting some kind of contribution claim.

One analytical framework recently formalised in the social sciences, Process Tracing, appears very promising for filling this gap. It comes with four (now famous) tests for assessing the probative value of a given piece of evidence in demonstrating a causal claim: the Hoop test, the Smoking Gun, the Doubly-Decisive test, and the Straw-in-the-Wind. Drawn on, more or less explicitly, in all sorts of human reasoning (courtroom trials, medical diagnosis, criminal investigation), these tests are compatible with an informal Bayesian logic of confidence updating, which can also be formalised and explicitly related to known properties of hypothesis tests such as sensitivity, specificity, and Type I and Type II errors. A series of measures of probative value have been proposed in the legal literature, one of which is the likelihood ratio. As these tools are increasingly and explicitly applied in historical studies, political science and social science, they seem to hold considerable promise for policy evaluation.
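As a rough illustration of the formalised view (a minimal sketch, not part of the seminar material; all probabilities are invented for the example), Bayesian updating via sensitivity, specificity and the likelihood ratio can be written in a few lines of Python. Here a “smoking gun” test is modelled as one where the evidence is rarely observed if the hypothesis is false (high specificity) but is also not guaranteed when it is true (low sensitivity):

```python
def update_confidence(prior, sensitivity, specificity, evidence_found):
    """Posterior probability of a hypothesis after an evidence test.

    sensitivity = P(evidence observed | hypothesis true)
    specificity = P(evidence absent   | hypothesis false)
    """
    if evidence_found:
        true_pos = sensitivity * prior                 # evidence and hypothesis true
        false_pos = (1 - specificity) * (1 - prior)    # evidence but hypothesis false
        return true_pos / (true_pos + false_pos)
    else:
        false_neg = (1 - sensitivity) * prior          # no evidence despite hypothesis true
        true_neg = specificity * (1 - prior)           # no evidence, hypothesis false
        return false_neg / (false_neg + true_neg)

# Illustrative "smoking gun": low sensitivity, high specificity.
# Likelihood ratio = sensitivity / (1 - specificity) = 0.3 / 0.05 = 6,
# so finding the evidence raises confidence sharply.
posterior = update_confidence(prior=0.5, sensitivity=0.3,
                              specificity=0.95, evidence_found=True)
```

By the same logic, a “hoop” test would have high sensitivity and low specificity: passing it changes confidence little, but failing it sharply lowers it.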

This seminar will illustrate the speaker’s experience of using these concepts and frameworks in the evaluation of publicly funded policies and programmes: namely, the testing of causal claims about the contribution an intervention has made to the achievement of an outcome. It has been argued that this amounts to a shift from “assessing impact” to “assessing confidence about impact”. The case material presented primarily concerns evaluations of policy influence; however, since the basic intuitions about belief and confidence updating already apply across so many scientific and professional fields, they seem to hold promise for policy evaluation across a broad range of sectors and types of intervention.

Date: 
Wednesday, 29 June 2016, 12:30 to 14:00
Presenter(s): 
Dr Barbara Befani
Presenter(s) biography: 

Barbara Befani specialises in innovative methods for impact evaluation. She is a member of CRESS and works within the Centre for the Evaluation of Complexity Across the Nexus (CECAN).

Location: 
26AD03