Jake Velker, MPA
Are randomized trials the way to finally start making a dent in reducing poverty, after years of hopeful thinking and disappointing results? Does this tool for evidence-based policymaking hold the key for practitioners to determine which poverty reduction programs work and which don’t? These questions motivate two recent books published by researchers who are at the vanguard of the randomized controlled trials (RCT) movement: More Than Good Intentions by Dean Karlan and Jacob Appel of Innovations for Poverty Action (IPA) and Poor Economics by Abhijit Banerjee and Esther Duflo of the Abdul Latif Jameel Poverty Action Lab (J-PAL). They deliberately introduce randomization in the implementation of anti-poverty measures to provide confidence in the programs’ efficacy (or lack thereof).
The results thus far have been dramatic and not always intuitive: microfinance is less effective than we hoped; free bed nets are used more often (and prevent more malaria) than those that cost money. IPA and J-PAL are currently involved in dozens of trials and are generally credited with bringing an unprecedented level of rigor to the evaluation of development—a field normally dominated by grand theories and polemics.
Enough praise has been heaped on the “randomistas” that I feel confident I can focus on the criticisms of their methodology without sounding uncharitable.
The first criticism—leveled forcefully by Princeton’s Angus Deaton—is that randomized trials do not help us in any systematic way to gain an understanding of why interventions work. In this sense, IPA and J-PAL are part of a broader trend in economic research that eschews theory in favor of real-world applications and problem-solving. This is irksome for many economists, particularly those who believe that poverty cannot be solved without a broader accounting of the mechanisms that keep people trapped in poverty. Randomized trials are beginning to test theoretical frameworks more directly, but there is still much progress to be made.
The most relevant criticism, however, is political. IPA and J-PAL economists have been accused of ignoring the institutional constraints against which their interventions would inevitably contend if scaled up. Often, research from randomized trials offers a conclusion like “we find that intervention X lowers Y disease transmission by Z percent.” While it is extremely helpful to have confidence in the efficacy of a treatment, glaring questions remain. Does the program work when it is implemented by a weak bureaucracy, rather than university-trained researchers? At scale, who will be responsible for administering the recommended program? What are their incentives to perform? Where will the money come from? If the intervention is so successful, were there good reasons it wasn’t tried before?
A troubling case in point involves one of the most heralded studies from the RCT movement to date. Working in Kenya, economists Michael Kremer and Edward Miguel found that providing de-worming medicine to students boosted school attendance cost-effectively. Spurred by their research, the Kenyan government committed to making de-worming medicine available to more than 3,000,000 of its primary school children in 2009. But the policy was recently discontinued due to a dispute between the Kenyan government and international donors over corruption and the administration of education funding. It should go without saying that for the ultra-poor, these sorts of bureaucratic obstacles are the norm, rather than the exception.
If economics is just supply and demand, the work of IPA and J-PAL has focused thus far mostly on demand. It is difficult to quantitatively study governance—imagine what an RCT studying a poor country’s provincial governance, for example, might look like—and even harder to actually improve the quality of basic services in developing countries. So many of the interventions RCTs have found to be effective involve classic public goods, which by definition remain under-provisioned by private markets. But the bureaucracies of developing countries are generally ineffective, if not downright corrupt. This is where economics loses its relevance and institutions and leadership rear their ugly heads.
These problems have not been amenable to ever-more creative randomized trials. In fact, many of the most celebrated findings of the RCT movement are relative “no-brainers.” Who, after all, would argue against treating poor school children for intestinal worms? Esther Duflo and her colleagues have said that we do not know what works. Many would respond that we know perfectly well what works, but do not know how to do it. Perhaps the real questions start once an intervention has been proven to work.
Randomistas respond to this critique as follows. First, it was never their ambition to overhaul the political economy of the developing world. The fact that they have found real evidence of effective interventions is in itself a major accomplishment. They believe that their approach can improve lives even in discouraging political settings. They are not promising a sweeping social revolution, but rather a “quiet revolution” of incremental gains. And even critics will concede that though the modesty of this approach may be unsatisfying, it is nonetheless an improvement on the empty promises all too frequent in the development world.
Abhijit Banerjee, Esther Duflo, Rachel Glennerster, and Cynthia Kinnan, “The miracle of microfinance? Evidence from a randomized evaluation,” Working Paper (unpublished), May 2009.
Jessica Cohen and Pascaline Dupas, “Free Distribution or Cost-Sharing? Evidence from a Randomized Malaria Prevention Experiment,” Quarterly Journal of Economics, Vol. 125:1, 2010.
For examples of such praise, see: Ian Parker, “The Poverty Lab: Transforming development economics, one experiment at a time,” New Yorker, May 2010; James Crabtree, “Attested Development,” Financial Times, April 2011; William Easterly, “Measuring How and Why Aid Works – or Doesn’t,” Wall Street Journal, April 2011; Ben Goldacre, “How can you tell if a policy is working? Run a trial,” The Guardian, May 2011; and Nicholas Kristof, “Getting Smart on Aid,” New York Times, May 2011.
Angus Deaton, “Instruments, Randomization, and Learning about Development,” Journal of Economic Literature, Vol. 48:2, June 2010.
Edward Miguel and Michael Kremer, “Worms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities,” Econometrica, Vol. 72:1, January 2004.
Justin Sandefur, “Held Hostage: Funding for a Proven Success in Global Development on Hold in Kenya,” Global Development: Views from the Center blog, Center for Global Development, April 2011.