Progressively improving upstream conservation initiatives: What’s the role of impact evaluation?

Many of us hear phrases like “global climate change,” “6th mass extinction,” and “unsustainable consumption” and shudder with fear and a sense of impotence. These problems are enormous and far-reaching. However, we take solace in the idea that people are working on solving these problems: organizations like the World Wildlife Fund, The Nature Conservancy, Conservation International, and various United Nations agencies are dedicated to protecting nature and conserving biodiversity, creating a safe living space for humanity. Even more heartening is the recent emergence of global initiatives like the Paris Climate Agreement, the Convention on Biological Diversity, the UN Sustainable Development Goals, and the growing number of zero-deforestation commitments (ZDCs) by leading companies. While these initiatives are sometimes criticized as overly ambitious and non-binding, they represent a broad commitment to addressing the scariest global environmental problems.

What often gets lost is that behind these ambitious environmental efforts are thousands of smaller initiatives. Among these are traditional conservation projects, but also a growing number of initiatives that aim to influence the behavior of actors one or more steps removed from “on the ground” efforts. This second type of effort, which we call Upstream Conservation Initiatives (UCIs), is critical to the success of the ambitious global targets mentioned earlier. After all, it would be impossible to reduce CO2 emissions without changing corporate behavior; it would be impossible to eliminate deforestation without reducing demand for deforestation-linked products. Like more direct conservation efforts, UCIs are situated within a complex ecosystem of NGOs, donors, government agencies, international funds, and multilateral initiatives.

Figure 1: UCIs apply to actors one degree or more removed from conservation outcomes like deforestation

Over the past decade, we’ve had the opportunity to work with many of these actors and witness how UCIs operate to achieve downstream goals. Unfortunately, we have come to believe that many, if not most, of the UCIs meant to catalyze a paradigm shift toward sustainability are not working. Even worse, most do not learn from their successes or failures, so projects do not improve over time.

This realization brought our team together: Gino Bianco, drawing from his seven years of experience at the Rainforest Alliance and four years with the World Wildlife Fund, and Danny Tobin, who works on setting up impact evaluations for international conservation and development programs as part of his thesis at Duke’s School of the Environment. We came together with the idea that, while many well-intentioned people are involved in UCIs, it is unclear whether these projects are achieving their intended results. Further, few initiatives generate learning that could enable future projects to improve and/or scale efforts.

In between our day jobs, we bounced ideas off each other, reviewed the existing literature, and compared it to our own professional experiences. This iterative process culminated in the paper “Harnessing Impact Evaluation to Build Evidence in Upstream Conservation Initiatives,” published in Biological Conservation. In the paper, we proposed the following:

  1. The impact evaluation (IE) discipline offers a proven toolkit for UCIs to generate evidence, enabling progressive improvement in design. IEs are not the only way to build high-quality evidence, but they are severely underutilized in the environmental field compared to the health, education, and poverty alleviation fields. In UCIs specifically, they are virtually unheard of. There is endless speculation as to why, but it often boils down to practitioners not knowing that these methods exist or dismissing them as too costly or too complex to implement. Mainstreaming IE in projects where it would be worth the cost will require new project designs and partnerships that bring together academia, donors, and implementers.

  2. Building evidence—through IE or other methods—requires documenting and sharing information. UCIs will struggle to build evidence if they cannot learn from past successes and failures. Part of the issue is that incentive structures for UCIs do not reward the collection, documentation, and sharing of rigorous evidence. Rather than making continued funding contingent on meeting indicator targets, donors should reward implementers that follow good processes and leave behind high-quality documentation, even if the project’s targets are not met.   

  3. Even when IE is not feasible for identifying the causal effects of a UCI, several elements of IE thinking can improve standard monitoring, evaluation, and learning (MEL). The purpose of MEL systems is “performance measurement,” i.e., tracking progress toward indicators to inform adaptive management. Yet we often ask the same systems to help us draw conclusions about impact; MEL is not the right tool for that. However, by incorporating ideas from impact evaluation, we can strengthen the ability of MEL systems both to adaptively manage projects and to leave behind some evidence. This is not a substitute for a more robust evidence-implementation system, but it is a stopgap for organizations that want to improve evidence quality but lack the resources or internal capacity to conduct a proper IE.

  4. Large international UCIs often resemble “waterfall projects,” with long, slow-moving, inflexible stages that struggle to adapt to new information. When they do adapt, it is often through haphazard approaches that jump quickly from strategy to strategy, or kitchen-sink approaches that try all strategies at once. While this is an understandable mindset in urgent situations that may require “all hands on deck,” this style of project implementation prevents learning. A better approach may be a more agile one: using the scientific method to repeatedly test smaller changes, observe results, and make informed improvements.

We apply these ideas to two UCI case studies from the WWF-led Demand Project: 1) a consumer awareness campaign aimed at shifting consumption away from unsustainable palm oil, and 2) an investor awareness-raising and training program designed to redirect investment away from deforestation-linked palm oil production.

In both UCIs, several common issues prevented us from learning how well they performed or whether they could serve as templates for other contexts or be scaled up in the same context:

  • Neither project had clearly defined, stable interventions or well-defined, time-bound expected outcomes (what we describe in the paper as “consistency”).

  • Neither had valid control groups (i.e. comparable groups that did not receive the training/information).

  • Neither had consistent selection criteria or target populations for the whole intervention.  

  • Neither could attribute changes (positive or negative) to the project, because neither used methods that would allow the observed results to be compared with what would have occurred in the counterfactual of no intervention at all (a simplified illustration of such a comparison follows this list).
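
To make the counterfactual point concrete, here is a minimal, hypothetical sketch of a difference-in-differences comparison, one common IE design that these projects lacked. All group names and numbers below are invented for illustration; they are not data from the Demand Project.

```python
# Minimal, hypothetical difference-in-differences (DiD) sketch.
# All numbers are invented for illustration only; they are not Demand Project data.

# Average outcome (e.g., share of shoppers choosing sustainable palm oil),
# measured before and after a campaign, for a group exposed to the campaign
# ("treated") and a comparable group that was not ("control").
treated_before, treated_after = 0.20, 0.28
control_before, control_after = 0.21, 0.26

# Naive before/after change for the treated group alone (what simple
# performance measurement reports): it mixes the campaign's effect with
# whatever change would have happened anyway.
naive_change = treated_after - treated_before           # 0.08

# The control group's change stands in for the counterfactual trend,
# i.e., what would likely have happened without the campaign.
counterfactual_trend = control_after - control_before   # 0.05

# DiD estimate: the treated group's change net of the counterfactual trend.
did_estimate = naive_change - counterfactual_trend      # 0.03

print(f"Naive before/after change: {naive_change:.2f}")
print(f"Estimated campaign effect (DiD): {did_estimate:.2f}")
```

Without a comparable group measured at the same times, neither the naive change nor an indicator target tells us how much of the observed change the campaign itself caused.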

As a result of these issues, it was difficult to determine which aspects of the UCIs succeeded, which failed, and what could be learned from either initiative. The consumer campaign reported a “null effect,” while the investor campaign was deemed successful based on improvements in its performance measurement indicators. Neither conclusion withstands rigorous scrutiny.

While these UCIs may have had positive effects, we cannot attribute positive or negative outcomes to the projects’ efforts based on the recorded data, the methodologies used, and the public reports filed. Moreover, it is unclear whether there was a plan to use information from these projects for future steps: Were the projects meant to end there? Could they have informed other projects? If successful, could they have been repeated or scaled? 

These were missed opportunities to determine whether multi-million-dollar efforts led by a strong organization were working. This is not the fault of WWF or the people who set up the project, but rather the result of a systematic undervaluing of evidence from projects. We used WWF as a case because we had access; in our view, the learning and evidence systems set up in this project were average or above average.

So how could it have been better? Through diagrams and explanations, we offer some ideas for how impact evaluation thinking could be integrated into the design, implementation, and learning processes of UCIs:

Figure 2: Impact Evaluation Guidance Process flow chart. The numbers in red boxes denote the suggested order of steps in the process; these steps are explained in the paper.

Finally, we conclude with some suggestions for structural changes that may be necessary, such as redesigned project documents and explicit efforts to link academia, donors, and implementers. These changes could make high-quality evidence and progressively improving projects the norm rather than the exception.

It’s important to note that the ideas presented in our paper are applicable beyond UCIs. We coined the phrase “upstream conservation initiatives” to encompass a growing number of projects on our radar and in our day-to-day work. However, the same lessons could be applied to any complex initiative, environmental or not, including sustainable development efforts such as payments for environmental services, jurisdictional conservation projects, and certification programs.

We see this work as the beginning rather than the end of a much larger effort to correct this system. We echo the calls of many before us, including Paul Ferraro, Subhrendu Pattanayak, Marc Jeuland, Madeleine McKinnon, Nick Salafsky, Richard Margoluis, and Michael Mascia, who have highlighted the issue of low-quality evidence from conservation projects. Standing on their shoulders, we call for better evidence and implementation incentive systems that foster progressive improvement.

We welcome your comments and critiques, and we hope you will join us in creating a better implementation-evidence system. Read the full paper here and feel free to email the authors if you have trouble accessing it (daniel.tobin@duke.edu; gino.bianco@wwfus.org).