
Monitoring, Evaluation, and Learning in Adaptive Programming: Expanding the State of the Art

September 12, 2018

By Ethan Geary

No matter how you slice it, implementing a project in international development ultimately boils down to this: is the project helping the people it intends to reach, and how do you know? That question translates directly into the essential nuts and bolts of project management and measuring results. Are you achieving what you had hoped? How do you measure it? Is the process improving? Are you getting better at what you're doing over time? Are people ultimately benefiting from the program?

In some cases, these questions are fairly easy to answer. If you build a road, you can measure the increase in traffic between two communities. If you build a dam, you can measure the amount of electricity produced or the number of people with a stable water supply. If you distribute mosquito nets, you hope to see malaria decline. These are generally straightforward projects with a measurable before and after.

But sometimes the causal relationship between project and result can be extremely difficult to measure, especially in the core areas of The Asia Foundation's work. We often work toward less tangible goals, such as improved governance, women's empowerment, or economic development. Such work takes time and is sometimes difficult to quantify. Development doesn't take place in a petri dish, and it may take a generation to bear fruit. The Foundation is not alone in this regard, and monitoring and evaluating development projects—often referred to as monitoring, evaluation, and learning (MEL)—has gained increasing attention from development practitioners in recent years. These practitioners—NGOs, academics, donors, and implementing organizations, among others—have devised empirically defensible methods to measure results and to evaluate them at set points.

But what about efforts that are less linear in nature? What if there are unknowns at the beginning, and the path to the intended results is rapidly changing? For example, what if the desired result is an improved policy, but it is uncertain at the outset how best to effect this change? In recent years, the international development community has begun to use the term "adaptive programming" to describe a methodology that continually evaluates results in real time, incorporates learning into a project at all stages, and reexamines the core logic of the program based on evidence from the field. The Asia Foundation has explored this approach—for example, in its cutting-edge Coalitions for Change (CfC) project in the Philippines, a partnership with Australia's Department of Foreign Affairs and Trade (DFAT). Over time, the project has tackled issues as varied as tax reform, infrastructure priorities, overcrowding in schools, national disaster-risk reduction, and voting guidelines that promote access for voters with disabilities.

Eager to learn how others were approaching adaptive programming, The Asia Foundation and DFAT hosted the Practitioner’s Forum for Monitoring, Evaluation, and Learning in Adaptive Programming this past June in Manila. The forum drew a diverse group of MEL practitioners from large and small development programs around the world: implementers, advisors, researchers, international organizations, and funding partners including DFAT and USAID. While previous conferences have examined adaptive programming, this forum was unique in its focus on MEL.

The Practitioner’s Forum looked at how to develop a culture of learning in projects, how to approach failure and accountability adaptively, and how systems and practices can adjust effectively to an evolving context.

Participants noted that, even in adaptive programs, where learning is encouraged and practiced, building a learning culture is challenging. One attendee commented that the challenge is not so much building learning as courageously reversing the lack of it. Why is this the case? One presenter cited the people factor: mindsets, interests, skills, ways of doing things, and trust issues that constrain the culture of learning. Another culprit is capacity, the combination of human, technical, organizational, and financial resources for MEL. One question that buzzed around the Forum, for example, was how much of a project budget to dedicate to MEL. Answers ranged from less than 2 percent to 20 percent, with an overall consensus that more should be invested than in the past.

The concepts of failure and accountability featured prominently as well. Adaptive programming treats failure as an opportunity for course correction, and one might call it a necessity in environments with many unknowns. But then how should failure be negotiated to maintain accountability? As discussed at the Forum, negotiating failure requires flexibility and tolerance from funding agencies, and bilateral donors, who answer to taxpayers, vary in their tolerance for risk. Participants noted that the traditional principal-agent model, in which a donor funds an implementer and the implementer reports to the donor, may need an update for adaptive programming and MEL, perhaps toward a shared decision-making model that lends itself better to responsiveness.

Another forum session discussed the importance of management and team composition in adaptive programs with MEL. In a rapidly changing operational context, the hierarchical model of a project manager and team often proves rigid and cumbersome, with a single decision-maker who can easily become a bottleneck. In adaptive MEL, decentralized decision-making improves responsiveness. This places a premium on team communication, because decentralized decisions must be conveyed throughout the project, and it can leave traditional mechanisms for monitoring and measuring project effectiveness struggling to catch up.

The Forum also examined the need for better ways to support the L in MEL: learning. Donors and implementers presented contrasting yet ultimately complementary views. Implementers stressed the need for flexibility in program-management tools like manuals, monitoring systems, and timelines and for new types of work planning, both internally and in relation to donor reporting. Donors, on the other hand, spoke of streamlining approvals to encourage faster decision-making by their implementing partners, and reflected on how some of their own systems could work more responsively. There was a clear consensus among all, however, that fundamental MEL systems and project-management processes need to be updated and jointly examined.

The Practitioner’s Forum did not provide conclusive answers, and it generated many new and vexing questions. But it did so with intellectual honesty, humor, and incisive thinking, expanding the state of the art and pushing MEL in adaptive programming toward new frontiers.

Ethan Geary is deputy country representative for The Asia Foundation in the Philippines. He can be reached at [email protected]. The views and opinions expressed here are those of the author and not those of The Asia Foundation.


