Elster on the Social Sciences

October 20th, 2009  |  Published in Social Science, Sociology, Statistics

The present crisis looks as though it may bring about a long-delayed moment of reckoning in the field of economics. Macro-economics has been plunged into turmoil: many of its leading practitioners stand exposed as dogmatists blithely clinging to absurd pre-Keynesian notions about the impossibility of economic stimulus and the inherent rationality of markets, with nothing at all to say about the roots of the current turmoil. Micro-economics, meanwhile, has seen Freakonomics run its course, as long-standing criticisms of the obsession with “clean identification” over meaningful questions spill over into a new row over climate-change denialism.

Joining the pile-on, Jon Elster has an article in the electronic journal Capitalism and Society on the “excessive ambitions” of the social sciences. Focusing on economics–but referring to related fields–he criticizes three main lines of inquiry: rational choice theory, behavioral economics, and statistical inference.

Although I agree with most of the article’s arguments, much of it seemed rather under-argued. At various points, Elster’s argument seems to be: “I don’t need to provide an example of this; isn’t it obvious?” And with respect to his claim that “much work in economics and political science is devoid of empirical, aesthetic, or mathematical interest, which means that it has no value at all”, I’m inclined to agree. But it’s hard for me to say that Elster is contributing a whole lot to the discussion. I’m also a bit skeptical of the claim that behavioral economics has “predictive but not prescriptive implications”, given the efforts of people like Cass Sunstein to implement “libertarian paternalist” policies based on an understanding of some of the irrationalities studied in behavioral research.

But the part of the essay closest to my own interests was on data analysis. Here Elster is wading into the well-travelled terrain of complaining about poorly reasoned statistical analysis. He himself admits to being inexpert in these matters, and so relies on others, especially David Freedman. But he still sees fit to proclaim that we are awash in work that is both methodologically suspect and insufficiently engaged with its empirical substance.

The criticisms raised are all familiar. The specters of “data snooping . . . curve-fitting . . . arbitrariness in the measurement of . . . variables”, and so on, all fit under the rubric of what Freedman called “data driven model selection”. And indeed these things are all problems. But much of Elster’s discussion suffers from his lack of familiarity with the debates. He refers repeatedly to the problem of statistical significance testing–both the confusion of statistical and substantive significance, and the arbitrariness of the traditional 5% threshold for detecting effects. While I wouldn’t deny that these abuses persist, I think that years of relentless polemics on this issue from people like Deirdre McCloskey and Jacob Cohen have had an impact, and practice has begun to shift in a more productive direction.

Elster never really moves beyond these technical details to grapple with the larger philosophical issues that arise in applied statistics. For example, all of the problems with statistical significance arise from an over-reliance on the null hypothesis testing model of inference–even though, as Andrew Gelman says, the true value of a parameter is never zero in any real social science situation. Simply by moving in the direction of estimating the magnitude of effects and their confidence intervals, we can avoid many of these problems.
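The contrast is easy to show concretely. A minimal sketch in Python, using simulated data (the effect size of 0.3 and all variable names here are my own illustration, not anything from Elster or Gelman): the significance test only asks whether the effect is exactly zero, while the estimate-plus-interval reports how big it plausibly is.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated outcome data: a small but real difference between groups
control = rng.normal(loc=0.0, scale=1.0, size=200)
treated = rng.normal(loc=0.3, scale=1.0, size=200)

# Null-hypothesis test: only answers "is the effect exactly zero?"
t_stat, p_value = stats.ttest_ind(treated, control)

# More informative: estimate the magnitude with a 95% confidence interval
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated)
             + control.var(ddof=1) / len(control))
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"p = {p_value:.3f}; effect = {diff:.2f} [{ci_low:.2f}, {ci_high:.2f}]")
```

Two studies with identical p-values can have wildly different substantive implications; the interval makes that visible where the bare accept/reject verdict does not.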

And although Freedman makes a number of very important criticisms of standard practice, the article Elster relies upon leans very heavily on the weakness of the causal claims made about regression models. As a superior model, Freedman invokes John Snow’s analysis of cholera in the 1850s, which used simple methods but relied upon identifying a natural experiment in which different houses received their water from different sources. In this respect, the article is redolent of the time it was published (1991), when the obsession with clean identification and natural experiments was still gaining steam, and valid causal inference seemed like the most important goal of social science.

Yet we now see the limitations of that research agenda. It’s rare and fortuitous to find a situation like Snow’s cholera study, in which a vitally important question is illuminated by a clean natural experiment. All too often, the search for identification leads researchers to study obscure topics of little general relevance, thereby gaining internal validity (verifiable causality in a given data set) at the expense of external validity (applicability to broader social situations). This is what has led to the stagnation of Freakonomics-style research. What we have to accept, I think, is that it is often impossible to find an analytical strategy which is both free of strong assumptions about causality and applicable beyond a narrow and artificial situation. The goal of causal inference, that is, is a noble but often futile pursuit. In place of causal inference, what we must often do instead is causal interpretation, in which essentially descriptive tools (such as regression) are interpreted causally based on prior knowledge, logical argument and empirical tests that persuasively refute alternative explanations.**

This is, I think, consistent with the role Elster proposes for data analysis, in the closing of his essay: an enterprise which “turns on substantive causal knowledge of the field in question together with the imagination to concoct testable implications that can establish ‘novel facts'”. And Elster gives some useful practical suggestions for improving results, such as partitioning data sets, fitting models on only one half, and not looking at the other half of the cases until a model is decided upon. But as with many rants against statistical malpractice, it seems to me that the real sociological issue is being sidestepped, which is that the institutional structure of social science strongly incentivizes malpractice. To put it another way, the purpose of academic social science is not, in general, to produce valid inference about the world; it is to produce publications. As long as that is the case, it seems unlikely that bad practices can be definitively stamped out.
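Elster’s split-sample suggestion is straightforward to operationalize. A minimal sketch on simulated data (the regression setup and all names are illustrative, not Elster’s own procedure): all exploratory model selection happens on one half, and the held-out half is touched only once, to evaluate the final model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)  # true slope is 2, unit-variance noise

# Partition once, at random, before any modeling
idx = rng.permutation(n)
explore, confirm = idx[: n // 2], idx[n // 2:]

# Model selection (here just a least-squares fit) uses ONLY the
# exploratory half; the confirmatory half stays untouched
slope, intercept = np.polyfit(x[explore], y[explore], 1)

# Only after the model is fixed do we look at the held-out half
pred = slope * x[confirm] + intercept
mse = np.mean((y[confirm] - pred) ** 2)
```

Because the confirmatory half played no role in choosing the model, its error estimate is honest; any data snooping done on the exploratory half cannot flatter it.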

**Addendum: Fabio Rojas says what I wanted to say, rather more concisely. He notes that “identification is a luxury when you have an abundance of data and a pretty clear idea about what causal effects you care about”. Causal inference where possible, causal interpretation where necessary, ought to be the guiding principle. Via the Social Science Statistics blog, there is also a very interesting paper by Angus Deaton on the problems of causal inference. Of particular note is the difficulty of testing the assumptions behind instrumental variables methods, and the often-elided distinction between an instrument that is external to the process under investigation (that is, not caused by the system being studied) and one that is truly exogenous (that is, uncorrelated with the error term in the regression of the outcome on the predictor of interest).
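The instrumental-variables logic at issue can be illustrated with a toy simulation (my own sketch, not Deaton’s; the Wald-estimator form and all variable names are assumptions for illustration). An unobserved confounder biases the naive regression slope, while an instrument that satisfies exogeneity recovers the true effect; nothing in the data itself certifies that exogeneity holds, which is exactly the untestable-assumption problem.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Unobserved confounder u drives both x and y, so regressing y on x is biased
u = rng.normal(size=n)
z = rng.normal(size=n)                  # instrument: shifts x, never y directly
x = 0.8 * z + u + rng.normal(size=n)
y = 1.0 * x + u + rng.normal(size=n)    # true causal effect of x on y is 1.0

# Naive regression slope: cov(x, y) / var(x) -- biased upward by u
ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# IV (Wald) estimator: cov(z, y) / cov(z, x) -- consistent only if z is
# uncorrelated with the error term (exogeneity), not merely "external"
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
```

Here the IV estimate lands near the true effect of 1.0 while the naive slope does not; but if z had been correlated with u, the IV estimate would be just as biased, and no within-sample diagnostic would flag it.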
