A Note on Methodological Discussions in Published Articles
…in a review of 34 empirical articles that employ IV estimation from 2004-2009 in the American Political Science Review and the American Journal of Political Science, only two (6%) mention that the causal effect being estimated is the LATE.
This is a specific instance of a general pattern common in methodological papers: the author(s) note that, in a review of the literature, only some Very Small Number of articles mention some Very Important Thing. The author(s) make this point in order to argue that this Very Important Thing is misunderstood or underappreciated by the discipline in general.
This is speaking directly to people like me. I’m not one of the authors who employed IV in the sample to which Aronow and Carnegie are referring, but I could have been: on two separate occasions I have been told by reviewers to “remove the discussion of the local average treatment effect” from a manuscript under review. One reviewer did not seem to understand what the LATE is. The other wrote something along the lines of “everyone knows what the LATE is, so get on with it.”
Despite the rhetorical power of the sort of claim that Aronow and Carnegie are making, failure to discuss an important methodological point does not necessarily reflect the author’s failure to understand it. It could also reflect an obstreperous reviewer or editor; these people are gatekeepers, so authors have strong incentives to implement whatever they recommend. Or it could, in principle, reflect the belief that the point is so broadly understood that it requires no explicit elaboration: do we cite Zellner 1962 in every paper that uses seemingly unrelated regressions?
In this specific example, Aronow and Carnegie are probably correct that most political scientists aren’t really thinking about what specific quantity the LATE represents, a claim I have made around here once or twice before. But we ought to be careful about how much a review of published studies can tell us about what their authors think, and by implication, what a discipline believes. Running manuscripts through the wringer of peer review very often produces articles that look very different from what their authors had originally intended.
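For readers who want a concrete sense of why the distinction matters, here is a minimal simulation sketch (my own illustration, not from Aronow and Carnegie; the population shares and effect sizes are made up). With heterogeneous treatment effects and imperfect compliance, the IV (Wald) estimate recovers the effect among compliers, which can differ substantially from the population average treatment effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Randomized binary instrument (e.g., encouragement to take up a treatment).
z = rng.integers(0, 2, n)

# Principal strata: 60% compliers, 40% never-takers, no defiers (monotonicity).
complier = rng.random(n) < 0.6
d = np.where(complier, z, 0)  # never-takers ignore the instrument

# Heterogeneous effects: treatment helps compliers by 2.0,
# and would help never-takers by 5.0 if they ever took it.
effect = np.where(complier, 2.0, 5.0)
y = 1.0 + effect * d + rng.normal(0.0, 1.0, n)

# Population ATE: 0.6 * 2.0 + 0.4 * 5.0 = 3.2
ate = effect.mean()

# Wald / IV estimator: (E[Y|Z=1] - E[Y|Z=0]) / (E[D|Z=1] - E[D|Z=0])
wald = (y[z == 1].mean() - y[z == 0].mean()) / (
    d[z == 1].mean() - d[z == 0].mean()
)

print(ate)   # ≈ 3.2, the average effect in the whole population
print(wald)  # ≈ 2.0, the complier-only effect (the LATE), not the ATE
```

The IV estimate is not wrong; it just answers a narrower question than readers may assume, which is precisely the point that reviewers asked me to delete.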