Program Objectives

This course is designed for people from a variety of backgrounds: managers and researchers from international development organizations, foundations, governments and non-governmental organizations from around the world, as well as trained economists looking to retool.

Course Coverage

Specifically, the following key questions and concepts will be covered:

  • Why and when is a rigorous evaluation of social impact needed?
  • The common pitfalls of evaluations, and why randomization helps.
  • The key components of a good randomized evaluation design.
  • Alternative techniques for incorporating randomization into project design.
  • How do you determine the appropriate sample size, measure outcomes, and manage data?
  • Guarding against threats that may undermine the integrity of the results.
  • Techniques for the analysis and interpretation of results.
  • How to maximize policy impact and test external validity.

The program will achieve these goals through a diverse set of integrated teaching methods. Expert researchers will provide both theoretical and example-based classes, complemented by workgroups where participants can apply key concepts to real-world examples.


  1. Introduction to effective evaluations

    A program evaluation attempts to answer the basic question: How would an individual have fared in the absence of the program? This unit examines the various methods of program evaluation, including both non-randomized “retrospective” evaluations (such as difference-in-differences estimation, multivariate regression, and panel regression) and randomized evaluations. Alternative methods applied to the same program can yield inconsistent impact estimates, with opposing policy implications. This unit makes the case that randomized evaluations produce the most reliable impact estimates.

    1. Why evaluations matter
    2. Different evaluation types
    3. Why randomized evaluations are the gold standard 
  2. Methods of Randomization

    This unit introduces the design stage of a randomized evaluation. It focuses on inventive and elegant ways to add randomization into a program, given practical, budgetary, and political constraints faced by researchers and program implementers.

    1. Specifying the program to be studied
    2. Determining the level of intervention: individual, school, country, or other
    3. Choosing the method of randomization
    4. Randomizing in the real world 
  3. Evaluation Design

    The process of designing an evaluation can help policymakers and implementers critically analyze their program, both to pinpoint its key objectives and to identify indicators that can measure those objectives. A well-designed evaluation answers not only what the program’s impact was, but also how that impact occurred. This unit reviews common practices for designing survey instruments and determining the sample size needed to detect an effect.

    1. Choosing objectives
    2. Identifying what variables to survey
    3. Selecting the population and calculating sample size
    4. Common pitfalls and their solutions 
  4. Implementation

    Impact estimates are useful only if the evaluation is implemented correctly and the resulting data are properly analyzed. Participants will review problems frequently encountered when implementing and analyzing randomized evaluations.

    1. Determining if an estimate is spurious or significant
    2. Attrition of study subjects
    3. Non-compliance and “contamination” of treatment/control designation
    4. Troubleshooting problems
    5. Hawthorne and John Henry effects
    6. Intention to Treat and Treatment on Treated

Teaching Methods

We will present material through a combination of interactive lectures, case studies, and relevant exercises. Participants will have group time to discuss cases with one another prior to lectures, as well as work jointly through a set of preparatory exercises designed to focus attention on key points. Additionally, participants will form 4-5 person groups that will work through the design process for a randomized evaluation of a development project they choose. Faculty and teaching assistants will aid the groups in this project, with the work culminating in presentations at the end of the week.

By examining both successful and problematic evaluations, participants will better understand the significance of various specific details of randomized evaluations. Furthermore, the program will offer extensive opportunities to apply these ideas, ensuring that participants will leave with the knowledge, experience, and confidence necessary to conduct their own randomized evaluations.


Two lectures are given during each of the first four days of the program. The fifth day is devoted to group presentations.


1 What is evaluation?

  • Needs assessment
  • Process evaluation
  • Impact evaluation
  • Cost-benefit analysis

2 Why randomize?

  • Different types of impact evaluation
  • Counterfactual
  • Selection bias
  • Internal / external validity

Get out the Vote: Do phone calls to encourage voting work? Why Randomize?

1 - Sampling Distribution with randomization (Web)

2 - Comparing different evaluation methods (MS Excel®)

3 How to Randomize I

  • Methods of randomization: lottery, phase-in, rotation, encouragement
  • Multiple treatments
  • Gathering support

Remedial Education in India: Evaluating the Balsakhi program; Incorporating random assignment into the program

4 How to Randomize II

  • Unit of randomization
  • Cross-cutting treatments
  • Stratification
  • Mechanics


3 - The mechanics of simple random assignment using MS Excel®
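The same mechanics as the Excel exercise can be sketched in Python: order units by a random draw, then take the top fraction as the treatment group. The 50/50 split, fixed seed, and household IDs below are illustrative assumptions.

```python
import random

def assign_treatment(ids, treat_fraction=0.5, seed=42):
    """Randomly split a list of unit IDs into treatment and control.

    Mirrors the spirit of the Excel exercise: sort units by a random
    draw, then take the top fraction as the treatment group.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = sorted(ids, key=lambda _: rng.random())
    cutoff = int(len(ids) * treat_fraction)
    treatment = set(shuffled[:cutoff])
    return {unit: ("treatment" if unit in treatment else "control") for unit in ids}

# Assign 10 hypothetical household IDs, half to each arm
assignment = assign_treatment(list(range(1, 11)))
```

A stratified design would simply repeat the same draw within each stratum; randomizing at a higher level (school, village) means the IDs are clusters rather than individuals.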

5 Measurement and Outcomes

  • Key hypotheses
  • Primary and intermediate outcomes
  • Interpreting multiple outcomes
  • Theory of change model
  • Questionnaire design
  • Data collection/entry

Women as Policy Makers: Measuring the effects of political reservations; Thinking about measurement and outcomes

6 Sample Size and Power Calculations

  • Estimation
  • Hypothesis testing
  • Power: significance level, variance of outcome, effect size
  • Clustered design


4 - Determining sample size, given budget constraints (OD software and MS Excel®)
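As a rough companion to the exercise, the standard normal-approximation formula for sample size per arm can be sketched in Python. The 0.2 standard-deviation effect, 5% significance level, and 80% power below are conventional illustrative defaults, not course-specified values.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(effect_size, sd, alpha=0.05, power=0.8):
    """Approximate sample size per arm for a two-sided test of a
    difference in means, using the standard normal approximation.

    effect_size: minimum detectable effect, in outcome units
    sd: standard deviation of the outcome
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 when alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 when power = 0.8
    n = 2 * ((z_alpha + z_beta) * sd / effect_size) ** 2
    return ceil(n)

# Detect a 0.2 standard-deviation effect (outcome normalized so sd = 1)
n_per_arm = sample_size_per_arm(0.2, 1.0)  # 393
```

For a clustered design, this per-arm number would be inflated by the design effect 1 + (m - 1)ρ, where m is the cluster size and ρ the intracluster correlation.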

7 Managing threats to evaluation and data analysis

  • Attrition
  • Externalities (spillovers)
  • Partial compliance and selection bias

Deworming in Kenya: Managing threats to experimental integrity

8 Analyzing Data

  • Intention to treat (ITT) and Treatment on treated (ToT)
  • Choice of outcomes and covariates
  • External validity
  • Cost-effectiveness
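The ITT/ToT distinction above can be illustrated with hypothetical numbers: under partial compliance, the ToT estimate rescales the ITT by the difference in take-up between the two groups (the Wald estimator). Everything below is made up for illustration.

```python
# Hypothetical numbers illustrating intention-to-treat (ITT) versus
# treatment-on-the-treated (ToT) under partial compliance.

mean_assigned_treatment = 70.0  # mean outcome among those ASSIGNED to treatment
mean_assigned_control = 64.0    # mean outcome among those assigned to control
takeup_treatment = 0.6          # share of the treatment group that took up the program
takeup_control = 0.0            # no one in the control group received the program

# ITT compares groups as randomized, regardless of actual take-up
itt = mean_assigned_treatment - mean_assigned_control  # 6.0

# ToT rescales ITT by the difference in take-up (the Wald estimator)
tot = itt / (takeup_treatment - takeup_control)        # ~10.0
```

ITT preserves the protection of randomization and answers the policy question "what happens when the program is offered"; ToT estimates the effect on those who actually took it up.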


Course Info

Learning Resource Types

Lecture Notes
Lecture Videos
Activity Assignments