By Susan Davis, Executive Director, Improve International

Last week I attended the 1st Pan-Asia-Africa Learn MandE Conference in Bangkok (M&E = monitoring and evaluation; in this case, for international development programs). The conference was well organized, with a good mix of implementing organizations, academics, donors, and a few software folks. I liked the size of the conference because all of us were able to attend every session – no rushing around to other rooms to find particular topics of interest.
Marla Smith-Nilson and I presented on the Accountability Forum / WASH Sustainability Rating. We were surprised to find that we were among the very few people talking about 1) independent evaluations and 2) doing evaluations years after program completion.
Other presenters talked about a dizzying array of acronyms, methods, and buzzwords – results-based monitoring, outcome mapping, data flow diagrams, causality, value for money. I learned a great deal about M&E. I also learned that for most organizations, the M&E ends when the program ends. Because that’s when the funding ends.
I just don’t think you can measure the true results / outcomes / causality / value of any program while you are still doing the program. Based on comments and questions from others at the conference, many of them understood this and were frustrated by the resource limitations and lack of donor interest.
I also sat on a panel where we discussed the statement “Monitoring and evaluation is preoccupied with reporting to donors rather than ensuring projects make a valuable contribution to host communities.” Most of the panelists (except for the CIDA representative) agreed that yes, monitoring and evaluation is mostly about reporting. One panelist suggested that organizations could do a better job of explaining to donors how monitoring and evaluation are integral to success and learning. I suggested that we have a huge, rich database of past international development projects to review. Rather than trying to guess at what indicators might predict success, we could analyze projects that have led to lasting results and identify what factors contributed to that. This would save us all some time in program design and monitoring.
Many would ask, who would do that? Who would pay for it? I would ask instead, if we really are trying to improve the lives of poor people, how can we afford not to do it?
Links to all of the presentations, including ours, can be found here.