Labor’s Share in Cross-National Perspective

October 21st, 2011  |  Published in Political Economy, Politics, Social Science, Sociology

Peter Orszag has a column about the diminishing share of labor in national income, relative to capital. Mike Konczal provides some useful additional discussion. Both of them frame the issue as a new empirical mystery, because it contradicts a “stylized fact” that economists have long assumed about capitalist economies: that the relative share of labor and capital in national income remains constant over time.

I try to avoid the characteristic sociologist’s vice of economics-bashing, but this does rather strike me as a case where economists are betraying their insularity by purporting to discover something that other social scientists are already talking about. Mike quotes (the generally excellent) Arjun Jayadev musing that “A more comprehensive account should really take a look at the politics of this shift and there is some evidence for the contention that an eroded bargaining power of labor is an important factor.” As it turns out, someone has looked at “the politics”, although they’re not an economist. Last year, the American Sociological Review published a paper called “Good Times, Bad Times: Postwar Labor’s Share of National Income in Capitalist Democracies” by Tali Kristal of the University of Haifa, which bears directly on this issue. (An un-gated version is here.)

One of the tricky things about explaining long-term economic trends is that we don’t have access to the counterfactual: what would the U.S. economy look like if, say, we still had 1950s levels of unionization? As a next-best solution, though, we can contextualize the United States by comparing it to other rich countries. The global economy is characterized, as Trotsky put it, by “combined and uneven development”: while the declining share of labor income is a cross-national phenomenon, it has not been experienced or responded to in exactly the same way everywhere. Kristal’s paper compares the U.S. to 15 other countries in the period since 1960, in an attempt to identify some of the factors behind labor’s declining income share.

Even if you don’t want to wade through the text, I highly recommend giving it at least a “quant-jock read”–that is, have a look through the charts and tables. I’ll try to summarize the main argument and findings of the paper here. Kristal shows that labor’s share of income has risen and fallen over the past century, “stylized facts” notwithstanding. In the United States and the UK, much of the increase in labor’s share took place before and immediately after World War II; there was a substantial postwar increase in continental Europe, the Nordic countries, Australia, and Japan. Since 1980, labor’s share has generally declined everywhere. But the scope and timing of this decline differ across countries, indicating that the relative position of capital and labor is related to the economic and political particularities of each country.

The assumption that labor’s share of income is constant implies that gains in national income due to increasing productivity are always shared equally by labor and capital. Kristal shows that this is not the case: in the 1960s and 1970s, labor income grew as fast as or even faster than productivity, whereas since 1980 labor income has lagged far behind. And this pattern holds cross-nationally.


Kristal makes the interesting point that this dynamic isn’t necessarily related to the much more studied phenomenon of rising income inequality. Income inequality could increase if one group of workers captured most of the wage gains, which would keep the overall labor share of income constant. And as Kristal wryly notes, studies of income inequality “tend to identify the capitalist class as a subset of the self-employed.”

In attempting to explain the changing fortunes of labor, economists are generally inclined to reach for explanations rooted in the market rather than the political sphere. Thus, as Kristal explains, the two leading explanations for the declining labor share of income have been technology (i.e., the adoption of labor-saving production techniques) and worker bargaining power (declining unionization, competition from abroad). But workers can alter their share of income by political means that go beyond the immediate power of unions in wage bargaining. When social democratic parties are in power, they can shift income shares by shielding workers from market forces, expanding public employment, and regulating the workplace, as well as by taking steps to strengthen labor unions.

Kristal attempts to capture these dynamics with a regression-based analysis, in which labor’s income share in a given year is predicted based on both economic and political variables. Changes in productivity, inflation, unemployment, union power, the strength of left parties in government, and several measures of economic globalization are all combined in the model. To quote from Kristal’s conclusion:

Labor’s share of national income increased in the 1960s and 1970s due to unions organizing new members, the surge in strike activity, and the consolidation of the welfare state. These factors all increased labor’s compensation faster than the economy’s income. Labor’s share declined since the early 1980s with the decline in unionization rates and levels of strike activity, stagnation in government civilian spending, and bargaining decentralization. Labor’s capacity to influence state policies has also declined across countries, and governments’ targets of full employment have been abandoned in favor of labor market flexibility and low inflation. The current decline in labor’s share of the national income can also be traced to an increase in imports from developing countries and the increased presence of foreign affiliates of multinational firms.

Technology, meanwhile, looks like it is not an independent source of labor’s diminishing share. That is, while increasing productivity is associated with a lower labor share of income, this association has always been present, even in the earlier periods when productivity growth and income growth matched up in the aggregate. What has changed is the set of countervailing political factors that used to ensure that a share of economic growth was paid out to workers.
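To fix ideas, here is a minimal sketch of the kind of pooled country-year regression Kristal runs. Everything below is invented for illustration–the variable names, the data, and the coefficients are mine, not Kristal’s actual data or specification–but it shows the basic logic of predicting labor’s share from both economic and political covariates:

```python
import numpy as np

# Synthetic country-year panel. All variables and magnitudes are
# illustrative, not Kristal's actual data or model.
rng = np.random.default_rng(0)
n = 400  # e.g. 16 countries x 25 years

union_density = rng.uniform(10, 80, n)   # % of workers unionized
left_cabinet = rng.uniform(0, 1, n)      # left-party share of cabinet seats
productivity = rng.normal(2.0, 1.0, n)   # annual productivity growth, %

# Data-generating process for the sketch: bargaining power raises
# labor's share, while productivity growth alone does not pass
# gains through to labor.
labor_share = (60 + 0.10 * union_density + 3.0 * left_cabinet
               - 0.5 * productivity + rng.normal(0, 1, n))

# Pooled OLS via least squares: labor_share ~ const + covariates
X = np.column_stack([np.ones(n), union_density, left_cabinet, productivity])
beta, *_ = np.linalg.lstsq(X, labor_share, rcond=None)

print(dict(zip(["const", "union", "left", "prod"], beta.round(2))))
```

The real paper’s model is of course richer (it includes inflation, unemployment, and several globalization measures, and has to deal with serial correlation within countries), but the structure is the same: political variables enter the equation on an equal footing with market ones.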

You can question some of the particulars of the modelling that leads to this conclusion, and in general it’s hard to disentangle the causal relations in this kind of bird’s-eye-view quantitative analysis. But as an overall correlational picture of what’s happened to labor in the past 50 years, I think there’s a lot in this analysis that people can learn from–maybe even economists.

Translating English into English

January 4th, 2011  |  Published in Art and Literature, Sociology

So it seems there’s going to be a censored version of The Adventures of Huckleberry Finn that replaces the word “nigger” with “slave”. My initial reaction was to agree with Doug Mataconis that this is both offensive and stupid. It struck me as being of a piece with the general tendency of white Americans to deal with the existence of racism by ignoring it rather than talking about it.

And I guess I still feel that way, but after reading Kevin Drum’s take I’m more sympathetic to Alan Gribben, the Twain scholar responsible for the new censored version. Gribben says that because of the extreme visceral reactions people have to the word “nigger”, most teachers today feel they can’t get away with assigning Huck Finn to their students, even if they’d really like to. So the choice was to either consent to this bowdlerization or else let the book gradually disappear from our culture altogether. I’m still a bit torn about it–and I think that the predicament of the teachers Gribben talked to is indicative of precisely the cowardly attitudes toward race that I described above. But I’m willing to accept that censoring the book was the least-bad response to this unfortunate state of affairs.

However, what most caught my attention about Kevin Drum’s post on the controversy was this:

In fact, given the difference in the level of offensiveness of the word nigger in 2010 vs. 1884, it’s entirely possible that in 2010 the bowdlerized version more closely resembles the intended emotional impact of the book than the original version does. Twain may have meant to shock, but I don’t think he ever intended for the word to completely swamp the reader’s emotional reaction to the book. Today, though, that’s exactly what it does.

That got me thinking a more general thought I’ve often had about our relationship to old writings: it’s a shame that we neglect to re-translate older works into English merely because they were originally written in English. Languages change, and our reactions to words and formulations change. This is obvious when you read something like Chaucer, but it’s also true, to a subtler degree, of more recent writings. There is a pretty good chance that something written in the 19th century won’t mean the same thing to us that it meant to its contemporary readers. Thus it would make sense to re-translate Huckleberry Finn into modern language, in the same way we periodically get new translations of Homer or Dante or Thomas Mann. This is a point that applies equally well to non-fiction and social theory: in some ways, English-speaking sociologists are lucky that our canonical trio of classical theorists–Marx, Weber, and Durkheim–all wrote in another language. The most recent translation of Capital is far more readable than the older ones–and I know I could have used a modern English translation of Talcott Parsons when I was studying contemporary theory.

Now, one might respond to this by saying that writing loses much in translation, and that some things just aren’t the same unless you read them in the original un-translated form. And that’s probably true. But it would still be good to establish the “English-to-English translation” as a legitimate category, since it would give us a better way of understanding things like the new altered version of Huck Finn. You would have the original Huck and the “new English translation” of Huck existing side by side; students would read the translation in high school, but perhaps they would be introduced to the original in college. We could debate whether a new translation was good or bad without getting into fruitless arguments over whether one should ever alter a classic book. And maybe it would help us all develop a more historical and contextual understanding of language and be less susceptible to the arbitrary domination of prescriptive grammarians.

Elster on the Social Sciences

October 20th, 2009  |  Published in Social Science, Sociology, Statistics

The present crisis looks as though it may bring about a long-delayed moment of reckoning in the field of economics. Macroeconomics has been plunged into turmoil now that many of its leading practitioners stand exposed as dogmatists blithely clinging to absurd pre-Keynesian notions about the impossibility of economic stimulus and the inherent rationality of markets, who have nothing at all to say about the roots of the current turmoil. Microeconomics, meanwhile, has seen Freakonomics run its course, as long-standing criticisms of the obsession with “clean identification” over meaningful questions spill over into a new row over climate-change denialism.

Joining the pile-on, Jon Elster has an article in the electronic journal Capitalism and Society on the “excessive ambitions” of the social sciences. Focusing on economics–but referring to related fields–he criticizes three main lines of inquiry: rational choice theory, behavioral economics, and statistical inference.

Although I agree with most of the article’s arguments, much of it seemed rather under-argued. At various points, Elster’s argument seems to be: “I don’t need to provide an example of this; isn’t it obvious?” And with respect to his claim that “much work in economics and political science is devoid of empirical, aesthetic, or mathematical interest, which means that it has no value at all”, I’m inclined to agree. But it’s hard for me to say that Elster is contributing a whole lot to the discussion. I’m also a bit skeptical of the claim that behavioral economics has “predictive but not prescriptive implications”, given the efforts of people like Cass Sunstein to implement “libertarian paternalist” policies based on an understanding of some of the irrationalities studied in behavioral research.

But the part of the essay closest to my own interests was on data analysis. Here Elster is wading into the well-travelled terrain of complaining about poorly reasoned statistical analysis. He himself admits to being inexpert in these matters, and so relies on others, especially David Freedman. But he still sees fit to proclaim that we are awash in work that is both methodologically suspect and insufficiently engaged with its empirical substance.

The criticisms raised are all familiar. The specter of “data snooping . . . curve-fitting . . . arbitrariness in the measurement of . . . variables”, and so on, all fit under the rubric of what Freedman called “data driven model selection”. And indeed these things are all problems. But much of Elster’s discussion suffers from his lack of familiarity with the debates. He refers repeatedly to the problem of statistical significance testing–both the confusion of statistical and substantive significance, and the arbitrariness of the traditional 5% threshold for detecting effects. While I wouldn’t deny that these abuses persist, I think that years of relentless polemics on this issue from people like Deirdre McCloskey and Jacob Cohen have had an impact, and practice has begun to shift in a more productive direction.

Elster never really moves beyond these technical details to grapple with the larger philosophical issues that arise in applied statistics. For example, all of the problems with statistical significance arise from an over-reliance on the null hypothesis testing model of inference–even though as Andrew Gelman says, the true value of a parameter is never zero in any real social science situation. Simply by moving in the direction of estimating the magnitude of effects and their confidence intervals, we can avoid many of these problems.
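The shift is easy to illustrate. In the toy comparison below–the groups, outcomes, and numbers are all hypothetical–the point is that reporting an effect size with its interval tells you how big the effect plausibly is, rather than merely whether it clears an arbitrary significance threshold:

```python
import numpy as np

# Hypothetical two-group comparison; all numbers are invented.
rng = np.random.default_rng(1)
treated = rng.normal(10.4, 2.0, 200)   # outcome in the treated group
control = rng.normal(10.0, 2.0, 200)   # outcome in the control group

# Estimate the magnitude of the effect and its 95% confidence interval,
# instead of just testing against an (always false) exact-zero null.
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / 200 + control.var(ddof=1) / 200)
ci = (diff - 1.96 * se, diff + 1.96 * se)

print(f"estimated effect: {diff:.2f}, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```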

And although Freedman makes a number of very important criticisms of standard practice, the article that Elster draws on leans very heavily on the weakness of the causal claims made for regression models. As a superior model, Freedman invokes John Snow’s analysis of cholera in the 1850s, which used simple methods but relied upon identifying a natural experiment in which different houses received their water from different sources. In this respect, the article is redolent of the time it was published (1991), when the obsession with clean identification and natural experiments was still gaining steam, and valid causal inference seemed like the most important goal of social science.

Yet we now see the limitations of that research agenda. It’s rare and fortunate to find a situation like Snow’s cholera study, in which a vitally important question is illuminated by a clean natural experiment. All too often, the search for identification leads researchers to study obscure topics of little general relevance, thereby gaining internal validity (verifiable causality in a given data set) at the expense of external validity (applicability to broader social situations). This is what has led to the stagnation of Freakonomics-style research. What we have to accept, I think, is that it is often impossible to find an analytical strategy which is both free of strong assumptions about causality and applicable beyond a narrow and artificial situation. The goal of causal inference, that is, is a noble but often futile pursuit. In place of causal inference, what we must often do instead is causal interpretation, in which essentially descriptive tools (such as regression) are interpreted causally based on prior knowledge, logical argument and empirical tests that persuasively refute alternative explanations.**

This is, I think, consistent with the role Elster proposes for data analysis, in the closing of his essay: an enterprise which “turns on substantive causal knowledge of the field in question together with the imagination to concoct testable implications that can establish ‘novel facts’”. And Elster gives some useful practical suggestions for improving results, such as partitioning data sets, fitting models on only one half, and not looking at the other half of the cases until a model is decided upon. But as with many rants against statistical malpractice, it seems to me that the real sociological issue is being sidestepped, which is that the institutional structure of social science strongly incentivizes malpractice. To put it another way, the purpose of academic social science is not, in general, to produce valid inference about the world; it is to produce publications. As long as that is the case, it seems unlikely that bad practices can be definitively stamped out.
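Elster’s split-sample suggestion is simple enough to sketch. In this toy version–the data and the candidate models are invented, and real work would want repeated or cross-validated splits–a model is chosen on one half of the data and only then scored against the untouched half, which is what disciplines the data-snooping Freedman worries about:

```python
import numpy as np

# Invented data: the true relation is linear with noise.
rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 300)
y = 1.5 * x + rng.normal(0, 1, 300)

# Partition the cases: fit on the first half, hold out the second.
half = len(x) // 2
x_fit, y_fit = x[:half], y[:half]
x_hold, y_hold = x[half:], y[half:]

def holdout_mse(degree):
    # Choose a polynomial model on the fitting half only...
    coeffs = np.polyfit(x_fit, y_fit, degree)
    # ...then score it on cases the selection never saw.
    pred = np.polyval(coeffs, x_hold)
    return np.mean((pred - y_hold) ** 2)

scores = {deg: holdout_mse(deg) for deg in (1, 5, 10)}
best = min(scores, key=scores.get)
print("held-out MSE by degree:", {d: round(s, 3) for d, s in scores.items()})
print("selected degree:", best)
```

A curve-fitted degree-10 polynomial will tend to look better than the linear model on the data it was fit to; the held-out half is what exposes that as overfitting.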

**Addendum: Fabio Rojas says what I wanted to say, rather more concisely. He notes that “identification is a luxury when you have an abundance of data and a pretty clear idea about what causal effects you care about.” Causal inference where possible, causal interpretation where necessary, ought to be the guiding principle. Via the Social Science Statistics blog, there is also a very interesting paper by Angus Deaton on the problems of causal inference. Of particular note is the difficulty of testing the assumptions behind instrumental variables methods, and the often-elided distinction between an instrument that is external to the process under investigation (that is, not caused by the system being studied) and one that is truly exogenous (that is, uncorrelated with the error term in the regression of the outcome on the predictor of interest.)

The Game Beyond the Game

September 10th, 2009  |  Published in Art and Literature, Cities, Politics, Social Science, Sociology

The new issue of City and Community has an article by Peter Dreier and John Atlas about a show that captivates many an urban sociologist, The Wire. Their piece extends comments they made last year in Dissent, in a symposium about the show. In both pieces, they repeat the common accusation that the show is nihilistic, because it presents urban problems but doesn’t show any solutions to them. To bolster the point, they dredge up a quotation from an interview, in which Simon proclaims that meaningful change is impossible “within the current political structure”.

As a corrective to what they see as The Wire‘s shortcomings, Dreier and Atlas catalogue some of the real community activists who have struggled against injustice in Baltimore, and won some small victories. And these are indeed inspiring and courageous people, who have managed to win some real improvements in people’s lives. But by bringing them up and presenting them as the solution to all the problems The Wire portrays, I think Dreier and Atlas miss the point of what David Simon and Ed Burns are doing with the show.

It’s misleading to say that The Wire is nihilistic. It’s true that the problems it portrays appear, within the context of the narrative, to be insoluble. And it may even seem, initially, as though the show is sympathetic to a conservative position: the poor will always be with us, government intervention always makes things worse, so we might as well just give up and try to make things better in our own small, individualist way. But this would be a profound misreading, because the show suggests, not that there are no solutions, but something far more complex. We come to understand, as the seasons unfold, that each of the dysfunctional institutions we see is embedded in a larger system that goes far beyond the scale of Baltimore. There is, as Stringer Bell puts it in season 3, “a game beyond the game”. We therefore have to conclude, not that there are no solutions, but that there may be no solutions at the scale of a single city.

The police find themselves hamstrung by their need to deal with national agencies like the FBI, which has been caught up in the mania of the “war on terror”. The dockworkers find their way of life destroyed by automation and the transformation of the global shipping industry. The mayor is at the mercy of Maryland state politics because he needs funding. The local newspaper struggles, and fails, to adjust to a world of profit-driven news and competition from new media. Even the drug dealers are at the mercy of their out-of-town “connect”.

None of this implies that Baltimore’s doom is inevitable. Neither imperialism, nor neoliberalism, nor Republican domination of state politics, nor the tabloidization of all journalism is inevitable. If they seem that way on the show, it is because of the careful and clever way in which the story is framed: these larger-scale institutions, the ones where the real agency lies, are always kept off screen and held beyond the reach of the characters. Thus the world the characters inhabit appears to them to be one where nothing can be changed. That doesn’t mean that the world of the show, which we viewers can sense, is actually so tragic.

But it is true that none of these problems can be solved in a single city, and most of them require a long-term and fairly radical project of social transformation. This may present difficulties for liberals who would prefer that social problems have incremental, non-threatening solutions. But by presenting small-scale local activism as an adequate response, Dreier and Atlas do a disservice both to the problems they address, and to the activists themselves.

Perhaps, however, their real political objective is somewhat different from simply promoting the importance of urban collective action. The giveaway comes at the end of the City and Community version of their essay:

Perhaps, a year or two from now, Simon or another writer will propose a new series to TV networks about the inner workings of the White House and an idealistic young president, a former community organizer, who uses his bully pulpit to mobilize the American people around their better instincts. This president would challenge the influence of big business and its political allies, to build a movement, a New Deal for the 21st century, to revitalize an economy brought to its knees by Wall Street greed, address the nation’s health care and environmental problems, provide adequate funding for inner-city schools, reduce poverty and homelessness, and strengthen the power of unions and community groups.

A show like that would certainly be a nice bit of wish-fulfilment for liberals who like to imagine a “great man” riding in and fulfilling all their fantasies. But it’s unclear what this has to do with our world, in which an ambitious young politician used his charisma and the wishful thinking of his base to ride to power, and then proceeded to cater to the needs of bankers and insurance companies while sinking America ever deeper into an intractable war in Afghanistan. Faced with that reality, the world of The Wire doesn’t look so nihilistic or unrealistic after all.

The theory of theory

May 9th, 2008  |  Published in Social Science, Sociology

Teppo Felin has a post over at OrgTheory that quotes Homans’ advice on theory-building. Thinking about where I agree or disagree with these strictures helped me see some of the ways I differ from much of mainstream social science. To take his points in order:

Look first at the obvious, the familiar, the common. In a science that has not established its foundations, these are the things that best repay study.

That one I agree with wholeheartedly. I guess it’s something everyone from Henri Lefebvre to the Freakonomics guys would concur on. Hannah Arendt wouldn’t like it, though.

State the obvious in its full generality. Science is an economy of thought only if its hypotheses sum up in a simple form a large number of facts.

This I’m much more ambivalent about. Often, attempts to theorize at maximum generality lead to theories that are false or vacuous. Just as important as generality is understanding the context in which a theory does or does not apply.

Talk about one thing at a time. That is, in choosing your words (or, more pedantically, concepts) see that they refer not to several classes of fact at the same time but to one and only one. Corollary: Once you have chosen your words, always use the same words when referring to the same things.

On the face of it, this seems like it should be uncontroversial. But I think it reflects a naive belief that scientific and literary language can easily be separated. I often find that when I’m writing up a sociological argument, I want to use different words and different constructions for the same concept, in order to make the tone seem less clunky and flat. And I think this is more than a matter of stylistics. Freshman composition to the contrary, language is not a window onto your thoughts. It is a social fact, and it is full of ambiguities and misunderstandings. In order to really get a new idea across, it is often necessary to restate it and rephrase it in many different ways, circling around your concept in order to triangulate your position in a way that is intelligible to others. If you just use one word, referring to one thing, you are at the mercy of whatever connotations and resonances that word will have for your audience. And that leaves you open to all kinds of misinterpretation.

Cut down as far as you dare the number of things you are talking about. “As few as you may; as many as you must,” is the rule governing the number of classes of fact you take into account.

This one is the flip side of the maximum-generality rule, and I object to it for similar reasons. It’s implicitly anti-dialectical, since it implies that the way to understand social phenomena is to break them down into little pieces and separate them from their context, rather than fitting them into a totality.

Once you have started to talk, do not stop until you are finished. That is, describe systematically the relationships between the facts designated by your words.

That’s a good one, and it’s advice I should be better at following. When I have a good idea, I sometimes have a hard time cashing it out before I get sick of it and abandon it.

Recognize that your analysis must be abstract, because it deals with only a few elements of the concrete situation. Admit the dangers of abstraction, especially when action is required, but do not be afraid of abstraction.

That’s a good one too, but it all depends on what you mean by abstraction. The commodity form is an abstraction I really like. The concept of utility, not so much. For Homans, of course, it would be just the opposite.

Playing Seriously

July 18th, 2007  |  Published in Social Science, Sociology

Wending my way through some posts on OrgTheory, I ran across an interesting post by Omar Lizardo. He summed up something that’s eaten away at me for a while as I attempt to socialize myself into academia:

I propose that one important component of success in science is the ability to not be serious about the “right” things and to be serious about seemingly unimportant things. This ability is not equally distributed: some people seem unable to not be serious about serious things. Other people are almost constitutively incapable of being serious about non-serious things; they are the ones who “don’t get” the scientific game and who think that getting into a (serious) shouting match over whether Simmel’s contributions have been justifiably neglected or whether Marx’s analysis of commodity fetishism is incoherent is the weirdest spectacle on the planet. My sense is that if you are one of those latter people and you are still in grad school, if you are “too cool” to take mere ideas seriously, you probably should be thinking about another day job.

He goes on to relate this to some comments from Bourdieu about “playing seriously”.

I am, assuredly, someone who can be serious about non-serious things, even (or especially) Marx’s analysis of commodity fetishism. Moreover, I enjoy being such a person, I want to be such a person, and I think the capacity to play seriously is one of the highest manifestations of the human spirit. Even in its lowest forms–such as the drunken bar argument over a sports team–I love and cherish the fact that our particular species of ape is one that can invest passion and energy in the inessential. Playing seriously is what we do in the realm of freedom.

My problem is that I feel guilty about this. This comes from my background in activism and socialist politics. As long as the inequalities of a class society persist, it feels like bad faith to be serious about the non-serious when there are plenty of serious things to be serious about. I’ve justified this before by arguing that since I have no talent for organizing, it’s better for me to put my energy into academic work, which I’m good at, and which will hopefully be politically useful at some point. But that just feels like an act of bad faith, a way to legitimate not doing something that should be morally imperative because I don’t feel like it. Maybe it would be better to do anti-war organizing badly than to do academic work well.

And of course, all this hand-wringing keeps me from doing academic work too, and instead causes me to procrastinate by writing posts like this.