Academics, Practitioners, and Donors: Whose Evidence Counts and For What?
December 10, 2014
There is a difficult tension in the evidence-seeking agenda: on the one hand, donors seek short-term, project-related outcomes to support claims of impact on a grand, societal scale; on the other hand, society-level impact does not seem measurable to those who seek to understand how short-term projects contribute to longer-term change. This raises the question: how can evidence be conceived and generated so that it is useful in understanding changes in wellbeing and progress in a society?
The Theory of Change movement has further compounded this tension: development actors are expected to articulate how their work will affect a given development problem and to be evaluated on that impact. On its face, this is a reasonable expectation; but the assumption that evidence of the impact in question is available, correctly conceived, and generated in a usable manner can be flawed. Moreover, development actors witness outcomes, intended or otherwise, that are difficult or impossible to measure using today's lexicon of evidence. For example, how does one measure "empowerment"? What constitutes useful evidence for "social accountability"?
Like all international development organizations, at The Asia Foundation we have struggled to deal with this tension. The dynamics of most development contexts we work in require investment in short-term gains that need to be strung together and parlayed into longer-term gains. This requires effort to navigate and shepherd variegated, measurable activities that add up to intended and unintended impact. It also requires an openness to understand impact in local terms such that a combination of data and interpretive narrative can provide the evidence that helps donors and others understand that progress is being achieved.
There is also a struggle between academics (who conduct research and provide "evidence" for donors) and practitioners (who implement programs and vie for donors' funds) over what programmatic outputs constitute evidence, and how broad or narrow that definition can be. Academics tend to define and scope evidence narrowly. Practitioners, on the other hand, often work in complex, dynamic situations with interdependent communities of people, interests, and relationships. As a result, practice-based views on what constitutes evidence are often more relaxed and broad.
This is not to say that academics are wrong and practitioners are right, or vice versa. However, donors who treat academic theories and definitions as strict guidelines for judging practitioners' work would benefit from greater cognizance of the conditions in which practitioners operate, and from a greater appreciation that a rural community in a post-conflict country, for example, is not a laboratory. The phenomena generated or observed there may not fully satisfy an academic theory or definition of evidence.
For a DFID-funded regional study of peacebuilding programs, The Asia Foundation’s Nepal office offered a Theory of Change for its long-standing community mediation program, claiming “social harmony” as the overall programmatic outcome.
The London School of Economics (LSE) studied the program in 2012 and 2013 and argued that our claim lacked evidence. In 2014, we countered. Extant social capital theory suggests that successful outcomes on social objectives are more likely when communities engage more "civically." We emphasized that the mediation program generates social trust and norms in peacebuilding that would qualify as evidence in support of, for example, Robert Putnam's theory of social capital. We also argued that the institutional support provided by the mediation program in the program locations has contributed to both individual and societal-level changes that have helped reduce longstanding prejudice, evidence consistent with Allport's "contact" hypothesis. However, it is difficult to fit the variegated, practice-based, observed phenomena into a single theory as evidence. Allport's hypothesis or Putnam's theory, by itself, would fail to capture the overall impact of the Foundation's mediation program. Rather, from our practitioners' perspective, many evidence-like observations aggregate into something greater, which we claim is in fact "social harmony."
Additionally, good evidence is often intertemporal in nature. This is especially true in peacebuilding and governance programs. Some evidence is proximate and immediately observable. Our mediation program immediately boosted the number of Dalit and indigenous mediators in new program locations in Nepal. Thus, previously overlooked and underrepresented groups suddenly had visible and important roles to play in their communities. This rapid change posed an immediate challenge to the status quo and threatened the power of local elites. More robust evidence, however, took time to emerge. A horizontal flow of social harmony occurred over the years, as neighboring villages observed the success of the mediation program and began demanding it for their own communities. A vertical flow of harmony took over a decade and manifested itself in the form of a national Mediation Act, which came into force on April 14, 2014. The more "valuable" or "acceptable" evidence thus takes time to gestate, in our case well over a decade.
It is also the case that evidence is not necessarily the product of a single program or intervention. The Asia Foundation, alongside many other actors including the Supreme Court of Nepal, advocated for the Mediation Act for years. Yet academics typically do not capture these dimensions in their evidence-seeking exercises. In the attempt to simplify a model or to allow broader application of a theory, the multi-layered, intertemporal, and multi-agent nature of evidence is either oversimplified or ignored entirely. As a result, privileging evidence derived from academic exercise over evidence derived from practice has grossly distorted the perspective and discourse on appraising wellbeing and progress.
There are ways to resolve such tensions in the evidence-seeking agenda. In an earlier piece, we argued that the evidence-based action agenda would benefit from practice-based inputs on the realities of evidence uptake and use in lesser-developed countries such as Nepal. One way to remedy the dislocation of practitioners from evidence generation is to employ tools of deliberative communication and facilitation that can generate "workable" evidence without expending large resources. The Asia Foundation and local non-profit partner Niti Foundation adapted such a tool for the Nepali context, the "Policy Lab," where development practitioners, together with academics and researchers, can collectively apply their respective strengths in evidence generation and analysis to make better-informed decisions.
This collective effort to link information on short-term gains with interpretive narratives that interrogate longer-term impacts can partially address the challenge of seeking adequate evidence of impact. Creative approaches and methods should be encouraged for collaborative exchange among academics, practitioners, and donors on how to derive evidence for the complex changes that manifest when development aid is deployed.
George Varughese is The Asia Foundation’s country representative in Nepal. Mukesh Khanal is a former program officer for the Foundation and is currently pursuing a graduate degree in Public Policy at the University of Calgary. Varughese can be reached at [email protected]. The views and opinions expressed here are those of the individual authors and not those of The Asia Foundation.
About our blog, InAsia
InAsia is posted and distributed every other Wednesday evening, Pacific Time. If you have any questions, please send an email to [email protected].