Impartial Evaluation Program

While it is tempting to believe that aid organizations’ well-developed plans and donors’ billions of dollars translate directly into more people being fed, the complicated nature of aid often upends even carefully laid plans. Ideas that make sense on paper may produce no results at all, while programs that seem less promising may far exceed expectations (Kremer, 2003). To complicate matters further, it is often difficult, and sometimes impossible, to separate the effects of a specific program from other variables that change while the program is running. Yet thorough evaluation is critical for identifying the programs that do the most to solve the hunger problem. For these reasons, aid programs must be evaluated on the basis of hard data rather than intuition. Such a system of evaluation will maximize the effect of every dollar spent on aid, cutting the enormous waste currently caused by ineffective or minimally effective programs.

Although it cannot be implemented in all cases, the best way to collect this type of information is through randomized evaluations. By implementing a program in certain randomly chosen areas, researchers can measure changes in those areas against changes in control areas, which, given a sufficient sample size, provides a reliable way of measuring program effectiveness. One proponent of this approach is the Abdul Latif Jameel Poverty Action Lab at MIT, which explains the reason for its interest in the area: “All else equal, randomized evaluations do the best job. They generate a statistically identical comparison group, and therefore produce the most accurate (unbiased) results. Or stated more strongly: other methods often produce misleading results—results that would lead policymakers to make exactly the opposite decision relative to where the truth would have directed them” (Abdul Latif Jameel Poverty Action Lab, 2010). One reason that aid has failed in the past is that it has been directed where it has little effect. An evaluation does carry a cost, typically about $30,000 to $100,000 (Iqbal Dhaliwal, personal communication, November 23, 2010), but this is small compared with the gains in efficiency it can produce. By using randomized and other types of evaluation and adhering to their results, we can dramatically improve aid’s effectiveness.
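
To make this comparison concrete, here is a minimal sketch, in Python, of how the headline estimate from such an evaluation might be computed. The outcome values are simulated for illustration only; a real evaluation would use survey data collected in the treatment and control areas.

    # A minimal sketch of a randomized evaluation's headline estimate.
    # The data are simulated; a real study would load survey results.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=0)

    # Hypothetical outcome: a household food-security score (0-100)
    # measured in randomly assigned treatment and control areas.
    treatment = rng.normal(loc=62.0, scale=10.0, size=200)
    control = rng.normal(loc=58.0, scale=10.0, size=200)

    # Because assignment was random, a simple difference in means is an
    # unbiased estimate of the program's average effect.
    effect = treatment.mean() - control.mean()

    # A two-sample t-test indicates whether the difference is larger
    # than chance alone would explain.
    t_stat, p_value = stats.ttest_ind(treatment, control)

    print(f"Estimated effect: {effect:.2f} points (p = {p_value:.4f})")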

This plan will be partially modeled on the evaluation methods of the Abdul Latif Jameel Poverty Action Lab (J-PAL). Some of the key aspects of the J-PAL model are incorporated into our impartial evaluation program, detailed below:

I. Analysis: Evaluation organizations will follow the J-PAL model, which assesses programs in the following steps:

  1. Needs Assessment: Determining the exact problem in the area.
  2. Program Theory Assessment: Analyzing the theory behind the plan to ensure that its assumptions and its goals are aligned.
  3. Process Evaluation: Examining how the plan is implemented and ensuring that the proper elements are in place.
  4. Program Evaluation:
    1. Is a particular intervention reaching its target population?
    2. Is the intervention being well-implemented?
    3. Are the intended services being provided?
    4. Is the intervention attaining the desired goals or benefits?
  5. Impact Evaluation: Determining the magnitude of the program’s impact on food security and malnutrition.
  6. Cost/Benefit Analyses: Analyzing the cost-efficiency, or the result produced per unit cost, of the plan (a minimal sketch of this calculation appears after this list).
  7. Goals, Outcomes, and Measurements: Determining whether the goals of increasing food security and decreasing undernourishment have actually been reached.
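
As a concrete illustration of the cost/benefit step (item 6 above), the sketch below compares two entirely hypothetical programs by cost per unit of outcome. It is written in Python; the program names, costs, reach, and effect sizes are assumptions for illustration, not figures from any actual evaluation.

    # A minimal sketch of the cost/benefit step. All numbers are
    # hypothetical placeholders, not figures from any real program.

    def cost_per_outcome(total_cost: float, people_reached: int,
                         effect_per_person: float) -> float:
        """Cost of producing one unit of outcome (e.g., a one-point
        gain on a food-security index for one person)."""
        total_effect = people_reached * effect_per_person
        return total_cost / total_effect

    # Comparing two hypothetical programs targeting the same outcome:
    program_a = cost_per_outcome(300_000, 10_000, 2.0)  # e.g., school meals
    program_b = cost_per_outcome(150_000, 4_000, 1.5)   # e.g., food vouchers

    print(f"Program A: ${program_a:.2f} per outcome unit")
    print(f"Program B: ${program_b:.2f} per outcome unit")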

II. Planning an Evaluation:

  1. Analyze the program’s goals and how they are to be achieved
  2. Analyze the program’s design
    1. What level of randomization does the evaluation require?
      1. What unit does the program target for treatment? (Household, apartment block, village, etc.)
      2. What is the unit of analysis? (Person, household, etc.)
      3. Ethical Considerations / Maximizing Fairness of Trials
      4. Political Feasibility
      5. Logistical Feasibility
      6. Sample size and reliability of results (larger samples give more reliable results; a sample of about 50 schools in each group will typically yield statistically significant results; see the sample-size sketch after this list)
    2. Methods of Random Implementation (the lottery and stratification methods are sketched after this list):
      1. Lottery
      2. On-the-spot decision
      3. Stratification (dividing the group into sub-groups and then randomizing within those sub-groups)
  3. Problems with implementation should be considered.
    1. Attrition: People may drop out of the evaluation, reducing its accuracy.
    2. Spillovers and Crossovers: Spillovers occur when the control group is affected indirectly by the program. Crossovers occur when individuals in the control group are directly affected by the program.
  4. Analysis of Results
    1. Using several outcome measures rather than just one, which allows us to identify the measure with the most significant difference between the experimental and control groups.
    2. Sub-group analysis: Examining effects within individual sub-groups rather than only across the sample as a whole.
  5. Next, conclusions can be drawn about the program. The conclusions will be internally valid, or valid for the region in which the evaluation is performed, but not necessarily externally valid, or valid for other regions. However, data analysis from the evaluation can help predict the effectiveness of the program in other circumstances.
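
The sample-size question raised in the planning steps above can be approached with a standard power calculation. The sketch below uses the common normal-approximation formula for a two-group comparison; the effect size and standard deviation are assumed values for illustration.

    # A minimal sketch of a sample-size (power) calculation for a
    # two-group trial, using the normal-approximation formula.
    import math
    from scipy import stats

    def n_per_group(effect: float, sd: float,
                    alpha: float = 0.05, power: float = 0.8) -> int:
        """Units needed in each group to detect `effect` at the given
        significance level and power."""
        z_alpha = stats.norm.ppf(1 - alpha / 2)
        z_beta = stats.norm.ppf(power)
        n = 2 * ((z_alpha + z_beta) ** 2) * (sd ** 2) / (effect ** 2)
        return math.ceil(n)

    # Example: detecting a 4-point change in a score with an SD of 10
    # requires about 99 units per group.
    print(n_per_group(effect=4.0, sd=10.0))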
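
Similarly, two of the random-implementation methods listed above, the simple lottery and stratified randomization, can be sketched in a few lines of Python. The village names and region labels here are hypothetical.

    # A minimal sketch of lottery and stratified random assignment.
    import random

    # Hypothetical units: (village, region) pairs.
    villages = [("V01", "north"), ("V02", "north"),
                ("V03", "north"), ("V04", "north"),
                ("V05", "south"), ("V06", "south"),
                ("V07", "south"), ("V08", "south")]

    rng = random.Random(0)

    # Lottery: shuffle all units and assign the first half to treatment.
    shuffled = villages[:]
    rng.shuffle(shuffled)
    lottery = {name: ("treatment" if i < len(shuffled) // 2 else "control")
               for i, (name, _) in enumerate(shuffled)}

    # Stratification: randomize separately within each region so that
    # treatment and control are balanced across regions.
    stratified = {}
    for region in {r for _, r in villages}:
        group = [name for name, r in villages if r == region]
        rng.shuffle(group)
        for i, name in enumerate(group):
            stratified[name] = "treatment" if i < len(group) // 2 else "control"

    print(lottery)
    print(stratified)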

Implementation of Findings:
The solutions we have suggested cannot be implemented immediately in every hunger-afflicted region in the world. As solutions are introduced, randomized trials will be run in specific regions to determine their effectiveness and scalability. After using randomized evaluations to determine which policies are effective, this program will also advise policy-makers and governments on designing effective new programs, drawing on the data gained from analyzing previously implemented ones.
 

Works cited: 

Abdul Latif Jameel Poverty Action Lab. (2010). Methodology. Retrieved November 23, 2010, from http://www.povertyactionlab.org/

Kremer, M. (2003). Randomized evaluations of educational programs in developing countries: Some lessons. American Economic Review, 93(2), 102-106. Retrieved November 10, 2010, from http://www.jstor.org/stable/3132208?seq=1