What Works? Program Evaluation Techniques for Policy Managers
The policy process is increasingly data-driven, but it is often challenging to make sense of competing estimates of programs’ impacts. This course will help you to understand where these estimates come from and which ones are the most credible. You will develop a more sophisticated understanding of the critical distinction between causation and correlation, you will learn about techniques that can provide plausible estimates of policies’ true impacts, and you will learn why these techniques often work…and why they sometimes don’t.
You need no prior exposure to these concepts to take the course. We will be following a “ground-up” approach, under the assumption that you are learning the material for the first time.
This course will be delivered virtually in two-hour live sessions over the course of five days.
Faculty: Adam Thomas
Course Goals
By the end of the class, students will be able to:
- Distinguish between correlational and causal evidence of programs’ impacts;
- Understand how to evaluate evidence produced by a range of analytical techniques, including randomized trials, simulation modeling, instrumental variables analysis, statistical matching, difference-in-differences, and regression discontinuity; and
- Explain in clear, jargon-free language how these techniques work and when they are (or are not) likely to prove useful as guides for policy formulation.