Wednesday, April 24, 2013

The Reinhart-Rogoff Paper And Stephen Colbert


If you are one of those people who think economics is about the most boring thing since spreadsheets, watch last night's Colbert show from 3:23 to 15:16.  That segment is all about how a UMass economics graduate student, Thomas Herndon, tried to replicate a very influential paper by Carmen M. Reinhart and Kenneth S. Rogoff.  That paper has been used as one of the launching pads for the current austerity policies.

Herndon was assigned the task of replicating the Reinhart-Rogoff paper in his econometrics class.  He tried and could not reproduce the original results.  Some background:

From the beginning there have been complaints that Reinhart and Rogoff weren't releasing the data for their results (e.g. Dean Baker). I knew of several people trying to replicate the results who were bumping into walls left and right - it couldn't be done. In a new paper, "Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff," Thomas Herndon, Michael Ash, and Robert Pollin of the University of Massachusetts, Amherst successfully replicate the results. After trying to replicate the Reinhart-Rogoff results and failing, they reached out to Reinhart and Rogoff, who were willing to share their data spreadsheet. This allowed Herndon et al. to see how Reinhart and Rogoff's data was constructed.
They find that three main issues stand out. First, Reinhart and Rogoff selectively exclude years of high debt and average growth. Second, they use a debatable method to weight the countries. Third, there also appears to be a coding error that excludes high-debt and average-growth countries. All three issues bias the analysis in favor of their result, and without them you don't get their controversial result.
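
Since the weighting issue is the least intuitive of the three, here is a minimal sketch in Python, using invented numbers rather than the actual Reinhart-Rogoff data, of how the choice between weighting by country and weighting by country-year can move an average, and of how a spreadsheet range that stops short of the last rows quietly drops observations:

```python
# A minimal sketch with invented numbers (not the actual Reinhart-Rogoff
# data):  how the weighting choice changes an average, and how a
# too-short spreadsheet range silently drops observations.

# Hypothetical growth rates (%) for the years in which each country's
# debt-to-GDP ratio sat above some threshold.
high_debt_years = {
    "Country A": [2.5],                   # one year above the threshold
    "Country B": [-0.5, -0.3, 0.1, 0.2],  # four years above the threshold
}

# Country-weighted:  average within each country first, then across
# countries.  Country A's single year now counts as much as all four
# of Country B's years combined.
country_means = [sum(g) / len(g) for g in high_debt_years.values()]
country_weighted = sum(country_means) / len(country_means)

# Year-weighted:  pool every country-year and average once.
pooled = [g for years in high_debt_years.values() for g in years]
year_weighted = sum(pooled) / len(pooled)

print(f"country-weighted average growth: {country_weighted:.2f}%")  # 1.19%
print(f"year-weighted average growth:    {year_weighted:.2f}%")     # 0.40%

# The analogue of the coding error:  a formula whose range stops short
# of the last rows quietly excludes them from the average.
truncated = pooled[:-2]  # oops: the last two observations never count
print(f"average over a too-short range:  {sum(truncated) / len(truncated):.2f}%")  # 0.57%
```

With one scheme the hypothetical high-debt average looks comfortably positive; with the other it sits near zero; and a too-short range shifts it yet again.  None of this requires bad faith, which is exactly why outside scrutiny matters.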
The Reinhart-Rogoff paper wasn't peer reviewed because it appeared in the Papers and Proceedings section of the American Economic Review (AER).  Ordinary papers in the AER are peer reviewed; the Papers and Proceedings ones are not.

But a peer review would not have caught those spreadsheet mistakes, given that the "peers" rarely (never?) review studies by replicating all the calculations.  The reasons for that are many:  such replication would often be a giant amount of work, peer reviews are unpaid, and, until quite recently, researchers rarely made their original data available.

What we need are more replications of studies, not only in economics but in all the sciences and social sciences.  The snag is that replication is time-consuming, and academics have few incentives to spend time repeating existing findings, given that neither promotions nor tenure are likely to drop into the laps of replicators (unless they happen to disprove famous findings).

But at a minimum, data used in such studies should be made available on the Internet.

This is not because I think that researchers do sloppy work or carefully stitch bias into their calculations and observations and so on, although that, too, probably happens.  It's because the incentives we provide for research will be improved if it is understood that any particular study can be subjected to scrutiny and replication.

While I'm writing about this topic, I also want to make a plea for assigning more value to studies which do not find anything startlingly different or new.  Indeed, finding that, say, a new treatment in medicine is no better than the old treatment is valuable information.  Similarly, publishing the news that one's pet theory has been rejected is important, however painful that might be.  The file drawer effect is bad for real scientific advances.