CBOs’ and Social Movements’ own PME learning systems

CHAPTER SEVEN: STAYING ALIVE TO CHANGE

Outcome Mapping

BEHIND OUTCOME MAPPING

Many of us are familiar with the pressure to monitor and evaluate our work in order to assess whether our efforts have made people’s lives better. Often this kind of change or ‘impact’ is far from where we work and depends on the actions of others too. The pathways linking our work to the better world we seek have many twists and turns – as well as many other travellers. We work in complex situations where sustained changes in well-being usually result from multiple factors, both positive and negative. Disentangling our contribution from other influences is a major challenge in all monitoring and evaluation.

For example, an evaluation may identify an improvement in the health of children two years after the initiation of a child health education programme with traditional healers. But can we say that the programme was the only cause of the improvement without researching everything else that happened in the lives of the children over the two years? Our work with traditional healers may have been a factor in the health improvement, but it is possible that it was only a small and insignificant contribution amongst several other more important factors. Improved crop yields, vaccinations and increased use of bed nets, none of which may be connected to our work, can make huge contributions to the health of children. Or our programme may have been significant only because of the contributions of other interventions – perhaps a change in the way the Ministry of Health allocates its resources.

EVALUATING IMPACT

Measuring the causes of ‘impact’ within the complex processes of development can require research resources and skills far beyond the capacity of a programme’s monitoring and evaluation (M&E) activities.
In fact, using our limited monitoring and evaluation systems for impact evaluation can be dangerously misleading if we do not recognize and understand the importance of other significant contributions.

While logframes and other logic models may be useful for simplifying and summarizing the rationale and components of a programme for communicating with some audiences, they do not offer an adequate basis for monitoring and evaluation. They often offer false hope that a single programme can actually achieve ‘impact’ all by itself. The simplicity of logic models can help us illustrate how a particular intervention is supposed to work. However, when it comes to measuring our results, this simplicity often misleads us by leaving out the emergent, complex, web-like or even circular ways that social change and organisational transformation really happen. Unfortunately, many programmes are required, by their sponsoring organizations, to use monitoring and evaluation to “prove” that their efforts have brought about lasting changes for poor people. The time and effort that goes into this tends to distract us from a deeper understanding of the messiness of development, and from exploring and learning how to “improve” the way we work within our organisations and communities.

OUTCOME MAPPING AS A DEVELOPMENTAL ALTERNATIVE

Originally developed by the International Development Research Centre (IDRC) in Ottawa, Canada, Outcome Mapping is based on twenty years of learning from field work in many corners of the world. Here are a few words from IDRC about OM:

A DEVELOPMENTAL APPROACH TO PME

“The focus of Outcome Mapping is on people and organizations. The originality of the methodology is its shift away from assessing the products of a program (e.g., policy relevance, poverty alleviation, reduced conflict) to focus on changes in the behaviours, relationships, actions, and/or activities of the people and organizations with whom a development program works directly.”

by Christine Mylks, Terry Smutylo and Doug Reeler