You wouldn’t drive your car without a dashboard, would you? For the same reason, you shouldn’t run your social program without knowing whether your actions are improving the lives of your target audience.

It’s that simple. And that’s why Impact Assessment is being talked about more and more every day. It’s not just about measuring how many books were distributed or how many hours of classes were taught in a support initiative, but about measuring the decrease in illiteracy in that area, and whether it was due to the program or to other causes.

A rigorous impact evaluation program greatly increases the transparency of social projects, showing whether the funds help the recipients or merely serve the self-preservation of the public or private organization managing them. In the non-profit sector, this is becoming fundamental even for fundraising and for a project’s sustainability. A recent study on philanthropy in Latin America by the Hauser Institute at the Harvard Kennedy School clearly shows that, as the focus moves from charity to change, many resources only become available if the plan can quantitatively show the results to be achieved.

Implementing such plans is neither easy nor cheap. NGOs and public organizations usually lack the necessary statistical expertise and the resources for data collection.

Although we are a small venture, we at Aventura de Construir are obsessed with achieving real improvement in the lives of low-income entrepreneurs from São Paulo’s suburbs, acting with maximum professionalism, realism and prudence.

We started our work in 2012 with a questionnaire involving around 150 people to understand the main unmet needs. From the beginning, we logged every contact with the target audience to build management indicators (“How many people participate in our training sessions? How many of them have participated once, twice, or three times?”, “How many were helped in an individual consultancy?”). After this first step, we started our impact evaluation program, with the assistance of the Kellogg Institute at the University of Notre Dame, ALTIS at the Catholic University of Milan, and Comunitas, from São Paulo. To learn more about our impact evaluation program, read here.

Here are some tips we want to share about this challenging work:

1. Think carefully about whether you want to do it: we spent 9 months defining and writing down our baseline, plus two person-weeks every 6 months to replicate the questionnaire with 70 of our entrepreneurs and 40 entrepreneurs from the control group.

2. Hire a statistician: sooner or later you will need help, and it’s better to be safe. The statistician’s work is fundamental not only for analyzing the data, but beforehand, when defining what to measure and how.

3. Whether or not you hire a statistician, read the World Bank Group’s guide on Impact Evaluation: it’s a free, simple guide that everyone can understand;

4. Define your process in advance: how will you deal with outliers? How will you replace people who leave the impact or control groups? If you only decide along the way, the temptation to adapt the rules to the desired results is strong, and external evaluators will always be suspicious.

5. Be careful with confounding factors: if revenues improved, it could just as well be the result of your work or of a general improvement in the economy. Without a well-chosen control group, it’s not worth doing an impact evaluation.
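One common way to separate the program’s effect from a general trend is a difference-in-differences comparison: the change in the treated group minus the change in the control group. A minimal sketch, with purely hypothetical revenue figures:

```python
# Difference-in-differences: isolate the program's effect from general trends.
# All figures below are hypothetical, for illustration only.

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Average change in the treated group minus average change in controls."""
    mean = lambda xs: sum(xs) / len(xs)
    treated_change = mean(treated_after) - mean(treated_before)
    control_change = mean(control_after) - mean(control_before)
    return treated_change - control_change

# Monthly revenues before and after the program:
treated_before = [1000, 1200, 900]
treated_after  = [1400, 1600, 1300]   # treated group improved by ~400 on average
control_before = [1100, 1000, 1200]
control_after  = [1250, 1150, 1350]   # controls also improved by ~150 (the economy)

effect = diff_in_diff(treated_before, treated_after, control_before, control_after)
print(effect)  # ~250: the improvement attributable to the program itself
```

Without the control group, the naive conclusion would have been a 400 improvement; the comparison shows that roughly 150 of it would have happened anyway.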

6. Invest in the initial data collection (the baseline): without a starting point, it is impossible to measure progress. In the number of questions or the number of people interviewed, it’s always better to err on the side of caution: you will always find unexpected things.

7. Be kind to your interviewees during the periodic evaluations: limit the number of questions as much as you can. Taking more than 5 minutes of their time is rude, and it increases the number of people who drop out of the questionnaire (and this will affect the quality of your results).

8. Ask simple questions, with objective answers: don’t ask “Do you like to read?”, but “How many books have you read in the last month?”

9. Consider a warm-up period: in the beginning, you will have many unexpected problems (in collection, in processing, in interpretation) that must be settled before you can produce comparable results. We ran 3 test surveys, three months apart, before considering the program consolidated.

10. In the analysis, don’t expect more from the numbers than they can offer:

a. “Not everything that is important can be measured, and not everything that can be measured is important”: the purpose of impact assessment is to relate causes to effects. The numbers alone cannot do that: they can support or refute a hypothesis, but they can never create one from scratch. So don’t forget to stay close to your audience and collect qualitative information. It will sharpen your insight.

b. The first thing you learn in technical education is that every measurement has its flaws. If that goes for measuring rocks, imagine for human responses!

c. Be careful with averages: before drawing conclusions, take a good look at all the data points. A strange data point (or a transcription error!) can make an average tell a story completely different from reality.
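A tiny illustration of that last point, with hypothetical figures: a single misplaced decimal point drags the average far from reality, while the median barely moves.

```python
# How a single outlier (or a transcription error) distorts the average.
# Figures are hypothetical, for illustration only.
from statistics import mean, median

revenues = [900, 1000, 1100, 1050, 950]
print(mean(revenues), median(revenues))    # both around 1000: a faithful picture

# One entry mistyped with an extra two zeros:
revenues_with_typo = revenues + [110000]
print(mean(revenues_with_typo))            # ~19167: tells a completely false story
print(median(revenues_with_typo))          # 1025: barely moves
```

Looking at the raw points (or comparing mean and median) is often enough to catch the error before it reaches a report.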

Last note: Goodhart, an English economist, became famous for his law: “When a measure becomes a target, it ceases to be a good measure.” With all the pressure for results, there is a risk – conscious or not – of improving the indicators rather than the target audience’s well-being. Ethics also means remembering that numbers describe reality, but can never take its place.

 

Lucas Bizerra