While it is tempting to believe that aid organizations' well-developed plans and donors' billions of dollars translate directly into more people being fed, the complicated nature of aid often upends such carefully laid plans. Ideas that make sense on paper may produce no results at all, while programs that seem less promising may far exceed expectations (Kremer, 2003). To complicate matters further, it is often difficult or even impossible to separate the effects of a specific program from other variables that change while the program is running. Yet thorough evaluation is critical for identifying the programs that do the most to solve the hunger problem. For these reasons, it is essential to evaluate the effectiveness of aid programs based on hard data rather than gut feelings or intuition. Such a system of evaluation will maximize the effect of every dollar spent on aid, reducing the enormous waste currently caused by ineffective or minimally effective programs.
Although it cannot be implemented in all cases, the best way to collect this type of information is through randomized evaluations. By implementing a program in randomly chosen areas, researchers can measure changes in those areas against changes in control areas, which, given a sufficient sample size, provides a reliable measure of program effectiveness. One proponent of this approach is the Abdul Latif Jameel Poverty Action Lab at MIT, which explains its interest in the method: “All else equal, randomized evaluations do the best job. They generate a statistically identical comparison group, and therefore produce the most accurate (unbiased) results. Or stated more strongly: other methods often produce misleading results—results that would lead policymakers to make exactly the opposite decision relative to where the truth would have directed them.” One reason aid has failed in the past is that it has been directed where it has little effect. An evaluation does carry a cost of roughly $30,000–$100,000 (Iqbal Dhaliwal, personal communication, November 23, 2010), but that figure is small compared with the gains in efficiency it can produce. By using randomized and other types of evaluation and adhering to their results, we can dramatically improve aid's effectiveness.
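The core logic of such an evaluation can be sketched in a few lines. The village names, outcome scores, and effect size below are purely hypothetical; the point is only to show how random assignment yields a treatment group and a comparison group whose mean outcomes can be directly compared:

```python
import random
import statistics

random.seed(42)  # fixed seed so the assignment is reproducible

# Hypothetical study: 20 villages, half randomly assigned to the program.
villages = [f"village_{i}" for i in range(20)]
random.shuffle(villages)
treatment, control = villages[:10], villages[10:]

# Simulated post-program food-security scores (illustrative numbers only):
# a baseline around 60, noise, and a true program effect of 8 points.
scores = {v: 60 + random.gauss(0, 5) + (8 if v in treatment else 0)
          for v in villages}

# The estimated program effect is the difference in group means.
effect = (statistics.mean(scores[v] for v in treatment)
          - statistics.mean(scores[v] for v in control))
print(f"Estimated effect: {effect:.1f} points")
```

Because assignment is random, the two groups differ only by chance and by the program itself, so the difference in means is an unbiased estimate of the program's impact.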
This plan will be partially modeled after the evaluation methods of the Abdul Latif Jameel Poverty Action Lab (J-PAL). Some of the key aspects of the J-PAL model are included in our impartial evaluation program detailed below:
I. Analysis: Evaluation organizations will follow the J-PAL model, which assesses programs in the following steps:
- Needs Assessment: Determining the exact problem in the area.
- Program Theory Assessment: Analyzing the theory behind the plan to ensure that its assumptions are sound and its goals match the needs identified.
- Process Evaluation: Examining how the plan is being implemented and ensuring that the proper elements are in place.
- Program Evaluation:
- Is a particular intervention reaching its target population?
- Is the intervention being well-implemented?
- Are the intended services being provided?
- Is the intervention attaining the desired goals or benefits?
- Impact Evaluation: Determining the magnitude of the program’s impact on food security and malnutrition.
- Cost/Benefit Analyses: Analyzing the plan's cost-efficiency, that is, the result produced per unit cost.
- Goals, Outcomes, and Measurements: Determining whether the goals of increasing food security and decreasing undernourishment have actually been reached.
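The cost/benefit step above amounts to simple arithmetic once an evaluation has measured a program's results. A minimal sketch, using entirely hypothetical program names and figures:

```python
# Hypothetical figures for two programs addressing undernourishment.
programs = {
    "school_meals":  {"cost": 50_000, "children_helped": 2_500},
    "food_vouchers": {"cost": 80_000, "children_helped": 3_200},
}

# Cost-efficiency expressed as dollars per child helped; the program
# with the lower figure produces more result per unit cost.
for name, p in programs.items():
    cost_per_child = p["cost"] / p["children_helped"]
    print(f"{name}: ${cost_per_child:.2f} per child helped")
# school_meals: $20.00 per child helped
# food_vouchers: $25.00 per child helped
```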
II. Planning an Evaluation:
- Analyze the program’s goals and how the goals are to be achieved
- Analyze the program’s design
- What level of randomization does the evaluation require?
- What unit does the program target for treatment? (Household, apartment block, village, etc.)
- What is the unit of analysis? (Person, household, etc.)
- Ethical considerations / maximizing the fairness of trials
- Political Feasibility
- Logistical Feasibility
- Sample size and reliability of results (larger samples give more precise estimates; a sample size of about 50 schools in each group will yield statistically significant results).
- Methods of Random Implementation:
- On-the-spot decision
- Stratification (dividing the sample into sub-groups and then randomizing within each sub-group).
- Problems with implementation should be considered.
- Attrition: People may drop out of the evaluation, reducing its accuracy.
- Spillovers and Crossovers: Spillovers occur when the control group is affected indirectly by the program. Crossovers occur when individuals in the control group are directly affected by the program.
- Analysis of Results
- Using several outcome measures instead of one, allowing us to identify the measure with the most significant difference between the experimental group and the control group.
- Sub-group analysis: Examining effects within sub-groups rather than only across the sample as a whole.
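The sample-size consideration in the planning list above can be made concrete with a standard power calculation for a two-group comparison. This sketch uses an illustrative outcome standard deviation and, for simplicity, ignores the clustering of individuals within schools:

```python
import math

# Illustrative assumptions: outcome standard deviation of 10 points,
# 5% significance level and 80% power (standard normal critical values).
sigma = 10.0
z_alpha, z_beta = 1.96, 0.84

def minimum_detectable_effect(n_per_group):
    """Smallest true effect a two-group comparison can reliably detect."""
    standard_error = math.sqrt(2 * sigma**2 / n_per_group)
    return (z_alpha + z_beta) * standard_error

for n in (10, 50, 200):
    print(f"n = {n:>3} per group -> MDE = {minimum_detectable_effect(n):.2f}")
```

The detectable effect shrinks as the sample grows, which is why a bigger sample yields more reliable results: with 50 units per group, much smaller true effects can be distinguished from chance than with 10.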
- Next, conclusions can be drawn about the program. The conclusions will be internally valid, or valid for the region in which the evaluation is performed, but not necessarily externally valid, or valid for any other regions. However, data analysis from the evaluation could predict the effectiveness of the program in other circumstances.
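The stratified randomization described in the planning steps can be sketched as follows, using a hypothetical sample of schools grouped by region. Randomizing within each region guarantees that treatment and control are balanced region by region rather than only on average:

```python
import random
from collections import defaultdict

random.seed(0)  # fixed seed so the assignment is reproducible

# Hypothetical sample: 12 schools in each of three regions.
schools = [(f"{region}_school_{i}", region)
           for region in ("north", "south", "east")
           for i in range(12)]

# Group schools by region (the stratifying variable)...
strata = defaultdict(list)
for name, region in schools:
    strata[region].append(name)

# ...then randomize within each stratum, splitting it half and half.
treatment, control = [], []
for region, members in strata.items():
    random.shuffle(members)
    half = len(members) // 2
    treatment += members[:half]
    control += members[half:]

print(len(treatment), len(control))  # 18 18
```

Each group ends up with exactly six schools from every region, so regional differences cannot be confounded with treatment status.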
Implementation of Findings:
The solutions we have suggested cannot be implemented immediately in every hunger-afflicted region of the world. As they are introduced, randomized trials will be run in specific regions to determine their effectiveness and scalability. In addition to analyzing programs through randomized evaluations, this program will advise policy-makers and governments on designing new, effective programs using the data gained from previously implemented ones.
Abdul Latif Jameel Poverty Action Lab. (2010). Methodology. Retrieved November 23, 2010, from http://www.povertyactionlab.org/
Kremer, M. (2003). Randomized evaluations of educational programs in developing countries: Some lessons. American Economic Review, 93(2), 102–106. Retrieved November 10, 2010, from http://www.jstor.org/stable/3132208?seq=1