This is Part 3 of a series of posts outlining the research journey of my PhD so far.
In my last post, I gave an overview of the many different approaches which can be used to evaluate environmental policies.
I think it’s pretty well established that evaluation is an incredibly important activity, as it’s really the only way to find out whether or not a policy is meeting its intended goals. And not only whether these goals are being met (effectiveness), but also whether the policy is efficient, whether any unintended outcomes have occurred, and whether there are any opportunities for policy improvement.
A number of key papers have examined the use of evaluation in the environmental and conservation policy spaces in the last 10 years, and many have provided guidance on how to conduct evaluations in different circumstances. Possibly the most prominent example is Ferraro and Pattanayak’s 2006 “Money for Nothing” paper, which introduced key concepts and terms from the evaluation literature to a broader audience, and called for a greater focus on evaluating the impact of conservation policies. More recently, Keene and Pullin (2011) argued for an “effectiveness revolution” in environmental management, and Miteva and others (2012) called for what they termed “Conservation Evaluation 2.0” – more urgent efforts to evaluate conservation policies in more locations.
In addition to providing useful background information and pointers to the extensive literature on how to conduct an evaluation, these papers and others have highlighted the need for a greater understanding of the effectiveness of environmental policies. So, we have the know-how, and we need to know whether the huge global investment in conservation is achieving results – surely we should be doing evaluation all the time, right?
Unfortunately, this is not really the case. In their review of studies which have evaluated the performance of conservation policies, Miteva and others (2012) found that “…credible evaluations of common conservation instruments continue to be rare”.
I think it’s important to note that Miteva and colleagues have a fairly strict definition of what they consider to be a credible evaluation – their particular interest was in studies able to infer the causal impacts of a policy, using quantitative, experimental and quasi-experimental designs. There is debate in the literature as to whether causality should necessarily be the key focus of all evaluations, as there are other policy criteria which can be subject to evaluation – for example, whether the process was transparent, whether there was adequate consultation, and so on. Notwithstanding this point, there is still a dearth of evaluation research in the conservation policy space – both quantitative and qualitative.
So, given all that, why on earth are we not evaluating more often?
1) It’s (methodologically) hard
Let’s face it, it’s pretty hard to do an evaluation if you don’t have any data to analyse. And when we’re working in the environmental space, collecting suitable data and doing an evaluation becomes even harder. Ecological systems are incredibly complex and difficult to measure – there are long time-lags, non-linear responses, and very large spatial scales to consider, to name just a few issues.
In addition to these complexities, it’s also been argued that a lack of familiarity with evaluation techniques is a key barrier:
…ecological scientists are often not familiar with impact evaluation methodologies; and environmental policy is one of the most difficult areas in which to conduct credible evaluations. Ferraro 2009, New Directions for Evaluation
That being said, ecological scientists don’t necessarily need to be familiar with impact evaluation methodologies themselves – ideally, interdisciplinary teams that can draw upon data and expertise from multiple sources – ecological, geographic, socio-economic, demographic, and institutional – would be best suited to tackling such evaluation challenges. But as Ferraro and Pattanayak (2006) point out, such interdisciplinary efforts can be rare (see also this paper).
2) Shouldn’t we be spending money on management, rather than monitoring (or evaluating)?
In the face of scarce data, it’s often easy to recommend that we just need to make a greater effort to collect it. In some cases this may be true, but I think it’s often more complex than this, and we need to think carefully about the time and resources required to collect data and whether those resources could be better used for other activities (see our recent paper in Conservation Biology for a discussion of this).
There’s a valid argument about how we should balance the resources devoted to monitoring and evaluation against actual on-ground management (see Eve McDonald-Madden’s 2010 paper on why monitoring doesn’t always count). There’s little point in monitoring if there’s no money left to implement a policy, but there are also risks in doing management without any monitoring and evaluation. In rare cases, we may be so confident in the effectiveness of a management action that there’s little to be gained from monitoring and evaluation. But most of the time, monitoring and evaluating the implementation of a policy and its associated management actions is important, so that we know whether the policy is effective, efficient, and equitable.
3) Many actors, and many different objectives
Environmental policy doesn’t exist in a rational, value-free vacuum where the only considerations are the environmental assets we’re trying to protect or manage. It operates within a complex socio-political system, containing a range of individuals and organisations with varying motivations and objectives.
Organisations responsible for designing and implementing policies, whether government or non-government, can be reluctant to evaluate how successful those policies are, as doing so opens them up to criticism if a policy has not been successful. It often takes considerable political will and a strong commitment to accountability for evaluation to occur.
In the conservation sector, there’s a tendency to promote successes and bury failures (Redford & Taber 2000), and the perception or reality that an organisation’s data or strategies are proprietary can lead to a reluctance, or inability, to publish such evaluations (Curtis et al. 1998). There’s often little incentive to work together in a world where competitive advantage between businesses (and NGOs competing for donor funding) is key, and where ideological and cultural differences can create tensions.
Neither conservation organizations nor donors have so far created a culture in which critical evaluation of outcomes is seen as desirable in its own right. Both individual and institutional concerns about exposing shortcomings have served as strong disincentives for critical evaluation and sharing of experience. Kapos et al. 2008, Conservation Letters

And finally, when I raised this issue about the lack of evaluation of environmental policy at a lab group discussion, one colleague told me: “You don’t make friends with salad. That doesn’t make it bad, though.” Perhaps evaluation is just not a sexy thing to do academically? Are you more likely to get published in a top journal by presenting a new idea or a cool way to analyse something, rather than a (perhaps fairly dry) analysis of policy efficacy? I’m not sure, but I think it’s a reasonable point, and it gives me an excuse to quote Homer Simpson in my blog.
So, these are some of the issues considered to be possible barriers to the evaluation of environmental policy in general. What about biodiversity offsetting specifically? Are there any unique challenges to consider?
In the next post, I’ll describe the next stage of my PhD research, which will look more closely at the range of issues that may be acting as barriers to evaluating the success (or otherwise) of biodiversity offset policy – and, by extension, the barriers that may exist to its successful implementation.