Why the 2021 Nobel Prize in Economics is so Important
The 2021 Nobel Prize in Economics was given to David Card, Joshua Angrist, and Guido Imbens for their work using observational data and natural experiments to help us understand causal relationships in economics. This was one of the Nobel Prizes that had been expected for a long time, especially by people like myself who use the tools created by these researchers to address an array of questions in development and labor economics. I wanted to write a short piece for the layperson to explain why this prize is so important and why it helps us to answer so many practical questions.
In many ways this year’s Nobel is a complement to the 2019 Nobel Prize given to Esther Duflo, Abhijit Banerjee, and Michael Kremer for their work in promoting the use of randomized controlled trials in economics to study the impacts of poverty programs and policies. There are essentially two types of data that empirical economists work with today: experimental and observational data. The 2019 prize was given for the former, the 2021 prize for the latter.
In the case of experimental data, most of the empirical sweat goes into the design and logistics of carrying out an experiment. Because the treatment variable is “controlled,” a researcher avoids the nasty problem of selection into a program being statistically confounded with people’s outcomes in the data. This makes the econometrics used for experimental data more straightforward because the data satisfy more of the assumptions needed for ordinary least squares estimation, the basic workhorse of econometrics.
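For readers who like to see this in code, here is a minimal sketch of that point using simulated data (the variable names and numbers are my own illustration, not from any real experiment): when the treatment is randomly assigned, a plain OLS regression of the outcome on a treatment dummy recovers the average effect of the treatment.

```python
# Minimal sketch: with randomized treatment, OLS on a treatment dummy
# estimates the average treatment effect. Data are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000
treated = rng.integers(0, 2, size=n)          # randomized treatment assignment
outcome = 2.0 * treated + rng.normal(size=n)  # true average effect = 2.0

X = sm.add_constant(treated)                  # intercept + treatment dummy
ols = sm.OLS(outcome, X).fit()
print(ols.params)   # coefficient on the dummy is close to 2.0
```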
The econometrics used with observational data is much trickier. It is trickier because social behavior is notoriously messy, complicated by countless influences on the data that are outside a researcher’s control. People are complicated objects of study. And as a result, the tricky nature of observational data requires empirical researchers to develop some tricks of their own. These involve ways to tease out causality from non-controlled data.
One of David Card’s most important contributions was his work using “difference-in-differences” to estimate causal effects with observational data. There can exist tremendous biases in comparing a “treated” group (such as program or policy beneficiaries) with an “untreated” group. The treated group might have characteristics that would make it better off or worse off than others who didn’t choose the treatment, even if the treatment didn’t exist. Similarly, there is a tendency to compare outcomes of people “before and after” they participate in a program. But this can likewise lead to bias. For example, a worker who makes the effort to enter a retraining program may have been better off later on simply because of this underlying motivation to make use of any available opportunities, not just that particular program. Card’s most famous pieces of research showcasing the difference-in-differences approach actually combine “treated vs. untreated” and “before and after” comparisons to examine critical labor issues such as the effect of the minimum wage and an influx of immigration on local unemployment. (Spoiler alert: they don’t affect unemployment significantly.)
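To make the idea concrete, here is a hedged sketch of the difference-in-differences logic with simulated data. The variables and numbers are my own invention, not Card’s data: one group is “treated” by a policy change, both groups share a common time trend, and the interaction term nets out both the permanent group difference and the trend.

```python
# Difference-in-differences sketch on simulated data (illustration only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2_000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, size=n),   # e.g., lives where the policy changed
    "post":    rng.integers(0, 2, size=n),   # observed after the policy change
})
# Groups start at different levels (3.0), everyone shares a time trend (2.0),
# and the true policy effect (1.5) applies only to treated units after the change.
df["outcome"] = (3.0 * df["treated"] + 2.0 * df["post"]
                 + 1.5 * df["treated"] * df["post"] + rng.normal(size=n))

# The coefficient on the interaction is the difference-in-differences estimate.
did = smf.ols("outcome ~ treated * post", data=df).fit()
print(did.params["treated:post"])   # close to the true effect of 1.5
```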
Joshua Angrist and Guido Imbens shared the other half of the Nobel for their work using natural experiments and observational data, work that promoted the use of matching models, instrumental variables, and regression discontinuity designs. Using the last of these, one of Angrist’s most famous papers looks at the effect of smaller class sizes on educational outcomes. The twelfth-century rabbinic scholar Maimonides proposed a maximum class size of 40, a rule followed in Israel: when enrollment in a class exceeds 40 students, the class is split in half to create two smaller classes. Angrist used this natural discontinuity to show the impact of smaller class sizes on elementary school kids’ test scores. After all, why should test scores otherwise differ between children who happened to be initially placed in a class of 40 versus a (subsequently split) class of 41?
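Here is a stylized regression-discontinuity sketch inspired by the Maimonides-rule example. To keep things simple I simulate data and treat the design as “sharp” (the class is always split above 40); Angrist’s actual study uses a fuzzier design in which predicted class size serves as an instrument, so take this only as an illustration of the comparison at the cutoff.

```python
# Stylized sharp regression-discontinuity sketch on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5_000
enrollment = rng.integers(20, 61, size=n)        # grade enrollment (running variable)
small_class = (enrollment > 40).astype(int)      # above 40, the class is split

# Scores drift smoothly with enrollment; the only jump at the cutoff (a gain of
# 3 points) comes from being in a smaller class.
score = 50 + 0.1 * enrollment + 3.0 * small_class + rng.normal(scale=5, size=n)
df = pd.DataFrame({"score": score, "small_class": small_class,
                   "dist": enrollment - 40})     # distance to the cutoff

# Local linear regression in a narrow window around the cutoff, with separate
# slopes on each side; the coefficient on small_class is the jump at the cutoff.
window = df[df["dist"].abs() <= 10]
rd = smf.ols("score ~ small_class + dist + small_class:dist", data=window).fit()
print(rd.params["small_class"])   # close to the built-in jump of 3
```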
Guido Imbens was a surprise winner in some ways because he still seems to be at the peak of his publishing career, with some of his most notable work coming only in the last few years. (Historically, Nobel Prizes in economics have been given for work that decades of hindsight and wine-barrel-like aging have found to be Nobel-worthy.) Imbens’s work spans the range of econometrics, establishing the conditions under which we should trust different methods using observational data to give us accurate results and when we should suspect bias. In the last several years, he and his equally capable research partner (and spouse), Susan Athey, have produced a number of papers showing how machine-learning techniques can be used with experimental data not only to show whether the average effects of an intervention are statistically significant, but also to identify which types of participants receive the largest and smallest benefits.
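To give a flavor of what “machine learning for heterogeneous effects” means, here is a simple illustration on simulated data. I use a generic approach (fit a flexible model separately for treated and control participants, then difference the predictions), which is a stand-in for the idea rather than the specific causal-forest estimators that Athey and Imbens developed.

```python
# Illustration of heterogeneous treatment effects with machine learning
# (simulated data; a generic two-model approach, not Athey and Imbens's estimators).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 5_000
age = rng.uniform(18, 65, size=n)
treated = rng.integers(0, 2, size=n)
# True program effect grows with age: older participants benefit more.
effect = 0.1 * (age - 18)
outcome = 1.0 + 0.05 * age + effect * treated + rng.normal(size=n)

X = age.reshape(-1, 1)
model_treated = RandomForestRegressor(n_estimators=200, random_state=0)
model_control = RandomForestRegressor(n_estimators=200, random_state=0)
model_treated.fit(X[treated == 1], outcome[treated == 1])
model_control.fit(X[treated == 0], outcome[treated == 0])

# Predicted individual-level effects at a few ages: larger for older participants.
grid = np.array([[25.0], [45.0], [60.0]])
print(model_treated.predict(grid) - model_control.predict(grid))
```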
This second half of the semester I will have the pleasure of teaching some of the techniques of the Nobel winners in our master’s applied econometrics course. In this sense, the timing of the prize couldn’t have been better for our class.
Follow Bruce Wydick at AcrossTwoWorlds.net and on Twitter @BruceWydick.