This is Part 2 of a series of posts outlining the research journey of my PhD so far.
In the previous post, I described the motivation for my PhD topic, which was to try to understand what outcomes have been delivered by biodiversity offsetting policies in order to inform future policy development.
A key criticism of biodiversity offsetting is that the measures implemented on-ground to compensate for impacts elsewhere often fail to deliver their intended environmental outcomes – or are never actually implemented as promised. Clearly, there’s a need to know what outcomes, if any, have been delivered by biodiversity offset policies, and ideally to understand why a particular result is being delivered. This is where policy evaluation comes in.
Now, I want to point out right away that there is a positively e n o r m o u s literature on policy evaluation, spanning many disciplines and schools of thought. One blog post is not going to capture all of that. So, the caveat is that this summary is far from comprehensive, and I’d recommend following up on some of the resources I suggest if you’re interested (and please leave further suggestions in the comments!).

First, I’ll clarify that in my research I’m interested in ex post or retrospective evaluation, where the focus is on determining the merits of a policy that has already been implemented, whereas ex ante or prospective evaluation is an analysis of the merit of a policy prior to implementation.
A general definition of evaluation is provided by Scriven (1991):
…the process of determining the merit, worth, or value of something, or the product of that process
Rossi et al (2004), one of the evaluation “bibles”, differentiates between summative and formative evaluation. Summative evaluation approaches examine the effects or outcomes of programs: for example, what was the impact or outcome of a policy intervention? Formative evaluation approaches, on the other hand, aim to strengthen or improve the program being evaluated, by answering questions such as “Why did these outcomes occur?” and “How can outcomes be improved?” Depending on the purpose of the evaluation, different approaches will be used to answer different questions.
The purpose of an evaluation may not simply be to determine what outcomes (intended and unintended) have been delivered by a policy; it may also be to examine the policy design or the process of implementation, or to attempt to establish the impact of the policy relative to what would have happened in its absence. The US EPA has a nice summary of the different forms of program evaluation here.
When I began my search for approaches I could use to answer my research question, I first looked to the ecology literature, given this is my background. Ecological research most often relies on quantitative evaluation approaches, and has been strongly influenced by medicine and public health in recent years (Pullin and Knight 2001). The drive for evidence-based conservation (Sutherland et al 2004) has led to the emergence of organisations such as Conservation Evidence and the Collaboration for Environmental Evidence, which provide access to systematic reviews and resources on how to conduct such a review. Long-term experimental evaluations are also a key feature of ecological research (e.g. Mortelliti et al 2014).
Economists are also big fans of quantitative approaches, though they are more likely to use a quasi-experimental design, which mimics an experiment. Often it’s not possible to evaluate a policy using experimental approaches, due to a lack of resources, foresight or ethical concerns (for example, it would not be ethical to test the effectiveness of a cancer drug by randomly allocating patients to control and treatment groups). A quasi-experimental design constructs an artificial control group from existing data, using a range of techniques such as propensity score matching, difference-in-differences, and instrumental variables. A key concern is establishing the counterfactual – that is, what would have happened in the absence of the policy intervention? This means trying to account for all possible confounding variables. Nice examples are this paper by Greenstone (2004), which asked whether the US Clean Air Act actually did lead to the massive observed reduction in sulphur dioxide emissions (spoiler alert: probably not), and Bottrill et al (2011), which found that recovery plans didn’t actually lead to threatened species recovery in Australia.
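To make the difference-in-differences idea a bit more concrete, here’s a minimal sketch in Python using simulated data. All the variable names, group labels and effect sizes are invented for illustration – this is not a re-analysis of Greenstone or Bottrill et al, just the bare bones of the technique:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel: an outcome (e.g. emissions) for units that are
# covered / not covered by a hypothetical policy, observed before
# and after the policy takes effect.
rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # 1 = unit covered by the policy
    "post": rng.integers(0, 2, n),     # 1 = observed after implementation
})

# True data-generating process: a common time trend (-2), a fixed gap
# between the groups (+3), and a policy effect of -5 that applies only
# to treated units after implementation.
df["emissions"] = (
    50 - 2 * df["post"] + 3 * df["treated"]
    - 5 * df["treated"] * df["post"]
    + rng.normal(0, 2, n)
)

# The interaction coefficient (treated:post) is the
# difference-in-differences estimate of the policy effect.
model = smf.ols("emissions ~ treated * post", data=df).fit()
print(model.params)
```

The coefficient on treated:post recovers the simulated effect of roughly -5. The whole design rests on the parallel-trends assumption: that treated and control units would have followed the same trajectory in the absence of the policy, which is exactly the counterfactual question described above.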
Non-experimental approaches can be used to estimate the effect of a policy by comparing the response variable before and after the intervention. Interrupted time-series or segmented regression analysis is popular in the health sciences for establishing the effect of a drug or treatment on patient outcomes (e.g. Wagner et al 2002). However, it’s generally more difficult to establish causality using these methods, so care needs to be taken when reporting results.
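Here’s a similarly hedged sketch of a segmented regression, again on simulated data (the series, the intervention point and the effect sizes are all made up). It follows the model form commonly used in this literature: a pre-intervention trend, a step change in level at the intervention, and a change in slope afterwards:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated monthly series with a hypothetical intervention at month 30:
# a downward level shift plus a change in slope afterwards.
rng = np.random.default_rng(0)
months = np.arange(60)
intervention = 30
post = (months >= intervention).astype(int)
time_since = np.where(post, months - intervention, 0)

outcome = (
    20 + 0.3 * months      # pre-intervention level and trend
    - 4.0 * post           # immediate level change at the intervention
    - 0.2 * time_since     # change in slope after the intervention
    + rng.normal(0, 1, months.size)
)
df = pd.DataFrame({"outcome": outcome, "time": months,
                   "post": post, "time_since": time_since})

# Segmented regression: 'post' estimates the step change and
# 'time_since' the slope change, relative to the pre-existing trend.
fit = smf.ols("outcome ~ time + post + time_since", data=df).fit()
print(fit.params)
```

In a real analysis you’d also need to deal with autocorrelation in the series (e.g. with robust standard errors or a time-series model), which this sketch ignores – one of the reasons causal claims from these designs need to be made carefully.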
Quantitative methods are good at answering questions about what the effects of a policy are, but can’t really speak to why or how these effects may have occurred. Qualitative evaluation approaches allow a deeper examination of a particular case study to gain an understanding of these issues. Data are collected through open-ended survey questions, interviews or focus groups (Patton 2001). For example, Coggan et al. 2013 used semi-structured interviews to understand the drivers of transaction costs in two offsetting schemes in Australia. While this research isn’t able to say whether one scheme is more effective than the other, we know that high transaction costs can reduce the efficiency of a policy, and an understanding of which factors drive transaction costs tells us where to focus our attention in reducing them.
Yin’s book on case study research is a key reference in this area, although there are variations in specific disciplines, such as Rose’s lesson-drawing in public policy, or Collier’s comparative method in political science. Mixed-methods studies combine quantitative and qualitative approaches to try to tackle the what, why and how comprehensively (Bamberger et al 2010).
Right, so, what does this have to do with offsets? :-) I’ll come back to that in the next post.