Social Science

Left of the Dial

July 18th, 2017  |  Published in Political Economy, Politics, Socialism, Work

Elizabeth Bruenig has written about the distinction between “liberals” and “the left.” She proposes that everyone in the broad tent of what she calls “non-Republicanism” is actually a liberal, in the following sense:

The second sense in which almost every non-Republican is a liberal is that they all agree with the tenets of liberalism as a philosophy: that is, the worldview that champions radical, rational free inquiry; egalitarianism; individualism; subjective rights; and freedom as primary political ends. (Republicans are, for the most part, liberals in this sense too; libertarians even more so.)

This is an easy statement for me to agree with–but I also think it brushes past some political distinctions that are important.

Am I a partisan of “radical, rational free inquiry”? I suppose I am, in that, like Marx, I endorse a “ruthless criticism of the existing order,” one which “will shrink neither from its own discoveries, nor from conflict with the powers that be.”

Do I believe in “egalitarianism”? Naturally–one of the basic structural features of my book is the distinction between a hierarchical society, like our own, and one where everyone shares in both the benefits and the sacrifices that are possible or necessary given our level of technological development and ecological constraint.

Individualism? Also uncontroversial, although it’s not entirely clear what the term is supposed to mean. I side with Oscar Wilde, who said that “With the abolition of private property, then, we shall have true, beautiful, healthy Individualism.” Instead of the false freedom of those condemned to work for others for a paycheck–free in Marx’s “double sense” of being free to sell our labor power and free of anything else to sell–we can have what Philippe Van Parijs calls “real freedom”: the freedom that comes from having the time and the resources to pursue self-actualization.

As for “subjective rights,” I’m not completely sure what that’s supposed to mean. Rights that are politically stipulated and democratically assigned, I guess, rather than arising from some divine concept of natural law? In that case, again, I’m on board, and I think the “social rights” arguments of people like T.H. Marshall can be usefully synthesized with the politics of opposing oppression and exploitation.

And then, of course, there is freedom. A word lodged deeply in the liberal tradition, and in the American tradition. And one, I think, that should be at the center of socialist politics as well. But freedom from what, and freedom to do what?

Here is Bruenig’s gloss on the meaning of socialism: “the economic aspects of liberalism (free or freeish market capitalism) create material conditions that actually make people less free.”

I like this, but again I find it vague. In describing my own political trajectory, I often talk about my parents’ liberal politics, and my own journey of discovery, through which I concluded that their liberal ideals couldn’t be achieved by liberal means, but required something more radical, and more Marxist.

But what would it mean to escape “the economic aspects of liberalism”? Would it mean merely high wages; universal health care and education; a right to housing; strong labor unions?

To be clear, I am in favor of all of those things.

But we’ve seen this movie before. It’s the high tide of the welfare state, which is nowadays sometimes held up as an idyllic model of class peace and human contentment: everyone has a good job, and good benefits, and a comfortable retirement. (Although of course, this Eden never existed for much of the working class.) Who could want more?

The historical reality of welfare capitalism’s postwar high tide, though, is that everyone wanted more. Capitalists, as they always do, wanted more profits, and they felt the squeeze from powerful unions and social democratic parties that were impinging on this prerogative. More than that, they faced the problem of a working class that was becoming too politically powerful. This is what Michal Kalecki called the “political aspects of full employment,” the danger that a sufficiently empowered working class might call into question the basic structure of an economy based on concentrated property rights and capital accumulation.

Sometimes socialists will emphasize economic democracy as the core of our politics. As the Democratic Socialists of America’s statement of political principles puts it, “In the workplace, capitalism eschews democracy.” According to this line of argument, socialism means taking the liberal ideal of democracy into places where most people experience no democratic control at all, most especially the workplace.

But when you talk about introducing democracy, you’re talking about giving people control over their lives that they didn’t have before. And once you do that, you open up the possibility of much more radical and disruptive kinds of change.

For it is not just capitalists who always want more, but workers too. A good job is better than a bad job, which is better than no job. Higher wages are better than lower ones. But a strong working class isn’t inclined to sit back and be content with its lot–it’s inclined to demand more. Or less, when it comes to the drudgery of most jobs. After all, how many people dream of punching clocks and cashing paychecks at the behest of a boss, no matter the size of the check or the security of the job? The song “Take This Job and Shove It” appeared in the aftermath of a period when many workers could make good on that threat, and did. In the peak year, 1969, there were 766 unauthorized wildcat strikes in the United States; by 1975 there were only 238.

All of this goes to the point that even if we could get back the postwar welfare state, that simply isn’t a permanently viable end point, and we need a politics that acknowledges that fact and prepares for it. And that has to be connected to some larger vision of what lies beyond the immediate demands of social democracy. That’s what I’d call socialism, or even communism, which for me is the ultimate horizon. The socialist project, for me, is about something more than just immediate demands for more jobs, or higher wages, or universal social programs, or shorter hours. It’s about those things. But it’s also about transcending, and abolishing, much of what we think defines our identities and our way of life.

It is about the abolition of class as such. This means the abolition of capitalist wage labor, and therefore the abolition of “the working class” as an identity and a social phenomenon. Which isn’t the same as the abolition of work in its other senses, as socially necessary or personally fulfilling labor.

It is about the abolition of “race,” that biologically fictitious and yet socially overpowering idea. It is a task inseparable from the abolition of class, however much contemporary liberals might like to distract us from that reality. As David Roediger details in his recent essay collection Class, Race, and Marxism, much of the forgotten history of terms like “white privilege” originated with communists, who wrestled with the problem of racism not to avoid class politics but to facilitate it. People like Claudia Jones, or Theodore Allen, whose masterwork, The Invention of the White Race, was, as Roediger observes, born of “a half century of radical organizing, much of it specifically in industry.”

And so too, no socialism worth the name can shrink from questioning patriarchy, gender, heterosexuality, and the nuclear family. Marx and Engels themselves had some presentiment of this: in The Origin of the Family, Private Property and the State, they understood that the control of the means of reproduction and the means of production were intimately and dialectically linked. But they could follow their own logic only so far, and so it fell to the likes of Shulamith Firestone to suggest radical alternatives to our current ways of organizing the bearing and raising of children. It took communists like Leslie Feinberg and Silvia Federici to complicate our simplistic assumptions about the existence of binary “gender.” And the more we win reforms that allow people to define their sexualities and gender identities, to give women control of their bodies, and to lessen their economic dependence on men, the more this kind of radical questioning will spill into the open.

So that’s what it means to me to be on “the left.” To imagine and anticipate and fight for a world without bosses, and beyond class, race, and gender as we understand them today. That, to me, is what it means to fight for individualism, and for freedom.

That’s one reason that I make a point of arguing for a politics that fights for beneficial reforms–single payer health care, living wages, all the rest–but that doesn’t stop there. A politics that fights for the “non-reformist” reform: a demand that is not meant to lead to a permanent state of humane capitalism, but that is intentionally destabilizing and disruptive.

The other reason is that, for all the economic and political reasons noted above, we can’t just get to a nicer version of capitalism and then stop there. We can only build social democracy in order to break it.

Is that what every liberal, or even every leftist, believes? From my experience, I don’t think so. That’s not meant to be a defense of sectarianism or dogmatism; I believe in building a broad united front with everyone who wants to make our society more humane, and more equal. But I have my sights on something beyond that.

Because if we do all agree that the project of the left is predicated on a vision of freedom and individualism, then we also have to regard that vision as a radically uncertain one. We can only look a short way into the future–to a point where the working class has had its shackles loosened a bit, as happened in the best moments of 20th Century social democracy. At that moment we again reach the point where a social democratic class compromise becomes untenable, and the system must either fall back into a reactionary form of capitalist retrenchment, or forward into something else entirely. What our future selves do in those circumstances, and what kinds of people we become, is unknowable and unpredictable–and for our politics to be genuinely democratic, it could not be any other way.

A $15 minimum wage is too high and that’s great

April 15th, 2016  |  Published in Political Economy, Politics, Socialism, Time, Work

How high is too high, for the minimum wage?

Dylan Matthews, in his wrap-up of the Democratic primary debate, says that his “off-the-record conversations with left-leaning Democratic economists” indicate that many of them “express grave concern about the $15-an-hour figure, about the danger that this time we might be going too far.” His Vox colleague Timothy Lee is tagged in to make the same argument in another post.

This despite the fact that Hillary Clinton has now apparently joined Bernie Sanders in endorsing the $15 minimum, going back on her previous unwillingness to go above $12.

And you know what? I think they might be right. It might be the case that a $15 an hour minimum wage is, as Matthews put it in a tweet, “dangerous”. To which my response is: that’s awesome!

The reason that bourgeois economists tend to think a high minimum wage is “dangerous” is because they think it will lead to reduced employment. This is for two reasons.

First, because if it becomes economically infeasible to hire people at $15 per hour for certain jobs, the employers may just go out of business, reducing the demand for labor. There is a large body of literature suggesting that this objection is overblown, dating back to Card and Krueger in the early 1990’s. But it’s hard to dispute that there is some level at which higher minimum wages will lead to reduced employment.

The second thing that could reduce employment, even if the minimum wage doesn’t force any businesses to go under, is automation. If it costs $15 an hour to pay a burger-flipper at McDonald’s, perhaps it will become more appealing to turn to a burger-flipping robot, of the sort offered by Momentum Machines. This is a retort often thrown at living wage advocates by conservative critics: joke’s on you, suckers; raise your wage and we’ll just automate your job!

Together, these arguments amount to a radical case for high minimum wages, not against them. Because they both get at the underlying political principle that should motivate any argument for higher wages: people need more money. That’s completely separate from the question of whether things like low-wage fast food jobs should exist at all, which they probably shouldn’t.

In other words, if $15 an hour makes it a little easier for a McDonald’s worker to survive, that’s great. But if it leads to some of those jobs disappearing entirely, then that forces us to confront an even bigger and more important question. Namely, how do we separate the idea of providing everyone with a decent standard of living from the idea of getting everyone a “job”? I’ve argued before that job-creation is a hole that we should stop digging.

The fight for 15 should be dangerous. I hope it is! I hope it leads to shorter hours, and a universal basic income. That’s what I’d call some real disruptive innovation.

Work to Need

February 23rd, 2016  |  Published in Socialism, Work

Many of us have found ourselves in jobs where there just wasn’t much work to do. We spent days sitting at desks surfing the Internet, while using innovations like the boss key, in case we needed to show our boss some pretense of being “busy.” This is ultimately a demoralizing and demeaning existence of pseudo-leisure, time which is not our own but is not being used for any purpose.

Anyone who has had that experience no doubt smiled at the story of Spanish civil servant Joaquín Garcia, employee of a municipal water company. When he was considered for an award for 20 years of service, it was discovered that he had not in fact shown up for work in 6 years, while continuing to draw his paycheck.

Garcia insisted that there was simply no work for him to do, and that he had been put in the job in the first place as political retaliation. Other sources contested the original report, claiming that he did show up to work but merely spent his time reading philosophy—becoming an expert on Spinoza, according to Mr. Garcia—which would make him just another case of dreary workplace pseudo-leisure.

But it was the original vision, of a man simply walking away from the pointlessness of his work, that gave the story its viral appeal. It punctured the mystification of “work,” that oppressive abstraction that I’ve tried to break down many times before. Garcia rejected the “work” of dutifully showing up for a job that had no reason to exist, in favor of the self-fulfilling “work” of reading philosophy. What might we all do if we could do the same?

The “work to rule” action is a popular labor tactic, an alternative to going on strike. It involves carefully and literally following every rule in the contract, which in most workplaces has the practical effect of slowing work down to a crawl. But perhaps we need something like the opposite: “work to need.” If everyone with a pointless, wasteful, or destructive job simply refused to show up to it, we would learn a lot about how much of our time is taken up with “work” that has everything to do with our dependence on wage labor, and nothing at all to do with the things we need to run a decent society.

Robot Redux

August 18th, 2015  |  Published in anti-Star Trek, Political Economy, Politics, Time, Work

It never fails that when I get around to writing something, I’m immediately inundated by directly related news, making me think that I should have just waited a few days. The moment I commit bits to web servers about the robot future, I see the following things.

First, the blockbuster New York Times story about Amazon and its corporate culture. The brutality of life among the company’s low-wage warehouse employees was already well covered, but the experience of the white collar Amazonian was less well known. The office staff, it seems, experiences a more psychological form of brutality. I couldn’t have asked for a better demonstration of my point that “the truly dystopian prospect is that the worker herself is treated as if she were a machine, rather than being replaced by one”. To wit:

Company veterans often say the genius of Amazon is the way it drives them to drive themselves. “If you’re a good Amazonian, you become an Amabot,” said one employee, using a term that means you have become at one with the system.

On to number two! Lydia DePillis of the Washington Post reacts to efforts to raise the minimum wage in exactly the way I mentioned in my post: by raising the threat of automation. She notes various advances in technology, while also observing that in recent times “the industry as a whole has largely been resistant to cuts in labor . . . the average number of employees at fast-food restaurants declined by fewer than two people over the past decade”. But, she warns, that could all change if the minimum wage is raised to $15.

Liberal economist (and one-time adviser to the Vice President) Jared Bernstein responds here. He makes, in a slightly different way, the same point I did: “one implication of this argument is that we should make sure to keep wages low enough so employers won’t want to bother swapping out workers for machines . . . a great way to whack productivity growth.” (Not to mention, a great way to make life miserable for the workers in question.) He then goes on to argue that higher wages won’t really lead to decreased employment anyway, which sort of undercuts the point. But oh well.

Finally, we have the Economist weighing in. This little squib on “Automation angst” manages to combine all the bourgeois arguments into one, in a single paragraph:

[Economist David] Autor argues that many jobs still require a mixture of skills, flexibility and judgment; they draw upon “tacit” knowledge that is a very long way from being codified or performed by robots. Moreover, automation is likely to be circumscribed, he argues, as politicians fret about wider social consequences. Most important of all, even if they do destroy as many jobs as pessimists imagine, many other as yet unimagined ones that cannot be done by robots are likely to be created.

So, to summarize. The robots won’t take your job, because they can’t. Or, actually, the robots can take your job but they won’t, because we will make a political decision to disallow it. Or no, never mind, the robots will take your job, but it’s fine because we will create lots of other new jobs for you.

This summarizes the popular approach to this problem well, from a variety of vantage points that all miss the main point: if it is possible to reduce the need for human labor, the question becomes who benefits. The owners of the robots, or the rest of the working masses?

Egyptian Lingerie and the Robot Future

August 6th, 2015  |  Published in anti-Star Trek, Feminism, Political Economy, Politics, Work

The current issue of the New Yorker has a story about the odd phenomenon of Chinese lingerie merchants in Egypt. These immigrant entrepreneurs are apparently ubiquitous throughout the poor, conservative districts of upper Egypt, where they dispense sexy garments to the region’s pious Muslim women. The cultural and geopolitical details of the story are interesting for a number of reasons, but I was struck in particular by a resonance with some debates that have recently flared up again about labor and automation, for reasons I’ll get back to below.

“Robots will take all our jobs” is a hardy perennial of popular political economy. Typical of the latest crop is Derek Thompson of the Atlantic, who wrote an article (in which he quotes me), speculating about a “World Without Work” in the wake of mass adoption of robotization and computerization. Paul Mason gives a more leftist and political rendition of similar themes.

As I note in my recent Jacobin editorial, this kind of thing is not new, and is in fact an anxiety that recurs throughout the history of capitalism. Two decades ago, we had the likes of Jeremy Rifkin and Stanley Aronowitz musing about the “end of work” and the “jobless future”.

And these repeating waves of robo-futurism call into existence the same repeated insistence that robots are not, in fact, taking all the jobs. Doug Henwood was on this beat twenty years ago and remains on it today. Matt Yglesias, likewise, calls fear of automation a “myth”.

One of the specific things that people like Henwood and Yglesias always cite is the productivity statistics. If we were seeing a wave of unprecedented automation, then we should be seeing rapid rises in measured labor productivity—that is, the amount of output that can be produced per hour of human labor. Instead, however, what we’ve seen is historically low productivity growth, compared to what happened in the middle and late 20th Century.
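The measure in question is just a ratio, and can be stated in a few lines; a minimal sketch, with invented figures rather than actual statistics:

```python
# Toy illustration of measured labor productivity: output per hour of
# human labor. All numbers are made up for illustration.

def labor_productivity(output_units: float, labor_hours: float) -> float:
    """Output produced per hour of human labor."""
    return output_units / labor_hours

# If a wave of automation let the same workforce produce more output in
# the same hours, it would show up here as rapid productivity growth:
before = labor_productivity(1000.0, 100.0)  # 10.0 units per hour
after = labor_productivity(1100.0, 100.0)   # 11.0 units per hour
growth = (after - before) / before
print(f"productivity growth: {growth:.1%}")  # productivity growth: 10.0%
```

The absence of any such spike in the actual numbers is what lets Henwood and Yglesias call the automation scare a myth.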

All of which leads commentators like Yglesias and Tyler Cowen to fret that the robots aren’t coming fast enough. Typical of most writers on this subject, Yglesias just worries vaguely that increases in productivity won’t happen for some unspecified reason.

I’ve argued a number of times for an explanation that connects the question of automation and productivity growth directly to wages and the general condition of labor. The basic idea is very simple. From the perspective of the boss, replacing a worker with a machine will be more appealing to the degree that the machine is:

  • Cheaper than the human worker
  • More convenient and easier to control than the human worker
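The two-item calculus above can be reduced to a toy comparison; a minimal sketch, in which every number and the `control_premium` parameter are invented for illustration, not an empirical model:

```python
# Toy sketch of the boss's automation calculus described above. The
# wage and machine-cost figures, and the control_premium notion, are
# hypothetical illustrations.

def prefers_machine(wage_per_hour: float,
                    machine_cost_per_hour: float,
                    control_premium: float = 0.0) -> bool:
    """Return True if the boss would rather automate the job.

    control_premium: extra hourly value the boss places on a machine's
    convenience and controllability relative to a human worker.
    """
    return machine_cost_per_hour < wage_per_hour + control_premium

# At a $7.25 wage, a machine costing $10/hour to run doesn't pay off:
print(prefers_machine(7.25, 10.0))                      # False
# Raise the wage to $15, and automation becomes attractive:
print(prefers_machine(15.0, 10.0))                      # True
# Even at a lower wage, a boss who prizes machine obedience may switch:
print(prefers_machine(9.0, 10.0, control_premium=2.0))  # True
```

Raising either the wage or the premium the boss places on machine obedience flips the decision, which is the dynamic the rest of the argument turns on.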

This implies that if workers win higher wages and more control over their working conditions, their jobs are more likely to be automated. Indeed, arguments like this frequently crop up among critics of things like the Fight for 15 campaign, which demands higher wages for fast food workers and other low wage employees. Prototypes for automatic burger-making machines are cited in order to warn workers that their jobs are at risk of being automated away.

I regard such warnings not as arguments against higher wages, but arguments for them. Workers, in the course of fighting for their interests, drive the dialectic that forces capitalists to find less labor-intensive ways of producing. The next political task, then, is to make sure that the benefits of such innovation accrue to the masses, and not to a small class of robot owners.

What I fear most is not that all of our labor will be replaced with machines. Rather, like Matt Yglesias, I worry that it won’t—but for a slightly different reason. Again, bosses prefer workers to machines when they are cheaper and easier to control. Hence the truly dystopian prospect is that the worker herself is treated as if she were a machine, rather than being replaced by one.

Which brings us back, finally, to the Chinese lingerie merchants. The article’s author, Peter Hessler, speaks to one such merchant, and asks him to comment on the biggest problem facing Egypt. To his surprise, his subject, Lin Xianfei, has a quick answer: gender inequality.

But the point turns out not to be that Lin is some sort of secret passionate feminist. Rather, his perspective turns on the exigencies of capital accumulation. For it turns out that while one kind of patriarchy is an impediment to business, another kind can be quite valuable to the shrewd businessman.

The problem, from Lin’s perspective, is that Egyptian women in his region don’t work in wage labor at all, or if they do they only do so for short periods of time, before marrying and retreating into the home. Even worse, local norms about proper female behavior preclude taking women out of their homes to live on site in massive dormitories, as might be done in China. Thus it becomes unfeasible to run factories on 24-hour production cycles.

Hiring men, meanwhile, is out of the question—another man, Xu Xin, tells Hessler that Egyptian men are too lazy and undisciplined for manufacturing work. Hessler goes on to note that “at the start of the economic boom in China, bosses hired young women because they could be paid less and controlled more easily than men”.

He proceeds to comment that female Chinese workers turned out to be “more motivated”, as though he is identifying something distinct from their weaker power position relative to men. But it is really the same thing. “More motivated”, here, refers to the motivation to work hard for the boss, for someone else’s profits and someone else’s riches. To behave, in other words, like obedient machines. The Chinese capitalist objects to the patriarchal structure of rural Egyptian society not because it is patriarchy, then, but because it is a form of patriarchy that is inconvenient to capital accumulation.

And sure enough, faced with recalcitrant humans, the textile magnates of Egypt turn to the same solution that the Chinese electronics firm Foxconn adopted in the wake of worker uprisings there. Wang Weiqiang echoes the other industrialists’ complaints about Egyptian labor: the men are lazy, the women “will work only during the daytime”. As a result, “he intends to introduce greater mechanization in hopes of maximizing the short workday”.

Greater mechanization and the maximization of a short work day might seem tragic to the capitalist, but they summarize the short-term goal of the post-work socialist left. Ornery, demanding workers drive the technological developments that further this goal. And the socialist-feminist rendition of this project insists that we can prevent workers from being treated as machines not by shielding them with patriarchal and paternalistic morals, but rather by insisting that men and women alike can recognize their paid and unpaid labor in order to better refuse it.

Beyond the Welfare State

December 10th, 2014  |  Published in Political Economy, Politics, Socialism, Work

Jacobin has published Seth Ackerman’s translation of an interesting interview with French sociologist Daniel Zamora, discussing his recent book about Michel Foucault’s affinities with neoliberalism. Zamora rightly points out that the “image of Foucault as being in total opposition to neoliberalism at the end of his life” is a very strained reading of a thinker whose relationship to the crisis of the 1970’s welfare state is at the very least much more ambiguous than that.

At the same time, Zamora’s argument demonstrates the limitations imposed by the displacement of “capitalism” by “neoliberalism” as a central category of left analysis. For his tacit premise seems to be that, if it can be shown that Foucault showed an “indulgence” toward neoliberalism, we must therefore put down his influence as a reactionary one. But what Foucault’s curious intersection with the project of the neoliberal right actually exemplifies, I would argue, is an ambiguity at the heart of the crisis of the 1970’s which gave rise to the neoliberal project. That he can be picked up by the right as easily as the left says much about the environment that produced him. Meanwhile, Zamora’s own reaction says something important about a distinction within the social democratic left that is worth spending some time on, which I’ll return to below.

Zamora makes much of the neoliberal move away from the attempt to reduce inequality, in the direction of targeted efforts to alleviate poverty and provide a minimum standard of living. (In a juicy bit bound to delight those of us immersed in the wonky details of empirical measures of inequality, he even quotes one of Foucault’s right-wing contemporaries positing that “the distinction between absolute poverty and relative poverty is in fact the distinction between capitalism and socialism”.) But in doing so, he elides the force of the Foucauldian critique of the welfare state. It is true that the move away from universal social provision and toward targeted aid is a hallmark of social policy in the era of welfare state retrenchment. But this is not the main point of Foucault’s argument, even by Zamora’s own telling.

Foucault, he argues, “was highly attracted to economic liberalism” because “he saw in it the possibility of a form of governmentality that was much less normative and authoritarian than the socialist and communist left.” It is possible to see this as nothing more than either reaction or naïveté, as Zamora seems to when he warns of Foucault’s mistake in putting “the mechanisms of social assistance and social insurance . . . on the same plane as the prison, the barracks, or the school.” But it’s possible to extract a different lesson about the nature of the system that Foucault was analyzing.

At the heart of Zamora’s own project, he says, is a disagreement with Geoffroy de Lagasnerie’s argument that Foucault represents “a desire to use neoliberalism to reinvent the left.” Rather, he argues “that he uses it as more than just a tool: he adopts the neoliberal view to critique the Left.”

Here we have the crux of the problem. For Zamora, the key political opposition is between “neoliberalism” and “the Left.” But neoliberalism is only a historically specific phase of capitalist class strategy, one which itself developed in the context of the particular form of welfare capitalism and class compromise that arose in the mid-20th Century. So if “the Left” is conceived primarily as a project against neoliberalism, its aims will be limited to the restoration of the pre-neoliberal order, which Zamora defines as “social security and the institutions of the working class.”

But the value of Foucault, and others like him, is in highlighting the limits of any such strategy. Postwar welfare capitalism was, to be sure, a substantive achievement of the working class and the socialist movement. And it represented an equilibrium—call it the Fordist compromise—in which workers shared in the benefits of rising productivity.

But it was also an inherently contradictory and self-subverting order. This was true both from the perspective of capital and of labor. For the capitalist, long periods of full employment and strong labor movements meant a profit squeeze and escalating political instability as workers lost their fear of unemployment and poverty. The Fordist compromise was no more satisfactory for workers, as the historian Jefferson Cowie documents in his writing on the 1970’s. What was called the “blue collar blues” represented the desire of workers for more than just higher paychecks: for more free time, for control over the labor process, for liberation from wage labor.

The welfare state institutions that arose in that context were marked by the same contradiction: they were at once sources of security and freedom, and instruments of social control. As Beatriz Preciado says, in a quote Zamora produces as evidence of the bad new libertarian left: “the welfare state is also the psychiatric hospital, the disability office, the prison, the patriarchal-colonial-heteronormative school.” One aspect of the welfare state made it dangerous to the employing class, while another chafed on the employed (and unemployed). Welfare capitalism has always been characterized by this tension between universalistic benefits tied to a universal notion of social citizenship, and carefully targeted systems of qualification and incentive designed to prop up specific social relations, from the workplace to the street to the home. This is a key insight of the school of comparative welfare state study that distinguishes the decommodifying from the stratifying elements of the welfare state.

One way to think of this is as the permeation of the contradictions of bourgeois democracy into the economic sphere. Just as capitalist democracies exist in an uneasy tension between the principles of “one person one vote” and “one dollar one vote”, so does the system of economic regulation simultaneously work to support the power of the working class and to control it.

In contrast, Zamora seems unwilling to countenance this two-sided quality to class compromises in capitalism. As he puts it, the choice is either “that social security is ultimately nothing more than a tool of social control by big capital” (a view held by unnamed persons on “the radical left”), or that the bourgeoisie “was totally hostile” to institutions that “were invented by the workers’ movement itself.”

Zamora appears to view social insurance as representing the creation of “social rights” that cushion workers from the vagaries of the market, while leaving the basic institutions of private property and wage labor in place. This is a non-Marxist form of social democracy with deep theoretical roots going back to Karl Polanyi and T.H. Marshall, and it was arguably the main way in which the European social democratic parties saw themselves in their heyday. This kind of social democracy is the protagonist in Sheri Berman’s recent book on the history of European social democracy, in which the Polanyian pragmatists are pitted against Marxists who, in her view, ignored the exigencies of social reform altogether in favor of an apocalyptic insistence that the capitalist system would inevitably collapse and usher in revolution. The endpoint of this kind of Polanyian socialism is a welfare state that protects the working class from the workings of an unfettered market.

There is, however, another way to think about the welfare state from a Marxist perspective. It is possible to believe that fighting for a robust and universal welfare state is a necessary and desirable project, while at the same time believing that the socialist imagination cannot end there, because the task of humanizing capitalism generates its own contradictions. On this view, the system Foucault analyzed was a system that could not simply continue on in static equilibrium; it had to be either transcended in a socialist direction, or, as happened, dismantled in a project of capitalist retrenchment. From this perspective, the importance of figures like Foucault is not just as misleaders or budding reactionaries, but as indicators of social democracy’s limits, and of the inability of the mainstream left at the time to reckon with the crisis that its own victories had produced. By the same token, neoliberalism can be seen not just as a tool to smash the institutions of the working class, but also as a mystified and dishonest representation of the workers’ own frustrated desires for freedom and autonomy.

Zamora speaks of Foucault imagining “a neoliberalism that wouldn’t project its anthropological models on the individual, that would offer individuals greater autonomy vis-à-vis the state.” Other than the name, this does not sound much at all like the really existing neoliberal turn, which has only reconfigured the densely connected relationship between state and market rather than freeing the latter from the former. This vision of autonomy sounds more like the radical move beyond welfare capitalism, toward Wilde’s vision of socialist individualism. (Provided, that is, that we accord autonomy from bosses equal place with autonomy from the state.) Postmodernism as premature post-capitalism, as Moishe Postone once put it.

None of this is to say that the fight for universal social provision is unimportant; nor is it to dispute Zamora’s point that the fight for universal economic rights has tended, in recent times, to be eclipsed by what he calls, quoting Isabelle Garo, “a centering of the victim who is denied justice.”

The point is only that it is worth thinking about what happens on the other side of such battles. Whether one finds it useful to think along these lines depends, ultimately, on what one sees as the horizon of left politics. Zamora speaks mournfully of the disappearance of exploitation and wealth inequality as touchstones of argument and organizing, and of the dismantling of systems of social insurance. Yet he himself seems unwilling to go beyond the creation and maintenance of humanized forms of exploitation, a perhaps more egalitarian (but not equal) distribution of wealth. He speaks favorably of Polanyi’s principle of “withdrawing the individual out of the laws of the market and thus reconfiguring relations of power between capital and labor”; meanwhile, André Gorz’s elevation of the “right to be lazy” is dismissed and equated with Thatcherism.

This Polanyian social democracy as a harmonious “reconfiguring” of the capital-labor relation is a far cry from the Marxist insistence on abolishing that relation altogether. But its inadequacy as either an inspiring utopia or a sustainable social order is the real lesson of the crisis that gave rise to neoliberalism. And while Foucault may not have come to all the right conclusions about addressing that crisis, he at least asked some of the right questions.

Not a riot, it’s a rebellion

August 14th, 2014  |  Published in Data, Politics



Solidarity to the people of Ferguson, Missouri, and a hearty fuck you to the cops, their bosses, and to anyone who wants to blather about “rioters” and otherwise engage in bogus “both sides” equivalency instead of keeping the focus on the extrajudicial executions of these state-sanctioned death squads. See also Robert Stephens II for an excellent analysis of the actions of the people in Ferguson as part of a process of political mobilization rather than simply undirected vandalism.

What is happening in Missouri is horrifying, yet unusual only in the attention it’s receiving. I hope it at least wakes people up to the nature of our heavily militarized police forces—Ferguson is in no way unusual. The other day I sent my editors a draft manuscript for the longer-form adaptation of Four Futures. In discussing the fourth of those futures, Exterminism, I describe the widespread militarization of the police in the United States, which has its roots in the 1960’s but has intensified in the post-9/11 period.

This is a literal case of “bringing the war home.” Many of the tanks and other equipment that can be found even in small towns are surplus military equipment, given away to police departments when no longer needed in Iraq or Afghanistan. And of course many cops are veterans, who had a chance to learn from the American government’s callous approach to civilian life abroad. I struggled to finish that chapter, because it seemed every day brought a new and more horrifying example of what I was writing about.

It all leads here:

Cops in Ferguson

But I’m only repeating what many are now saying. As some kind of substantive contribution, I figured I’d refute a specific canard that arises from defenders of the warrior cops in situations like this. That is, that all of these trappings of military occupation are necessary because of the oh so dangerous environment the police supposedly face.

Policing is not the country’s safest job, to be sure. But as the Bureau of Labor Statistics’ Census of Fatal Occupational Injuries shows, it’s far from the most dangerous. The 2012 data reports that for “police and sheriff’s patrol officers,” the Fatal Injury Rate—that is, the “number of fatal occupational injuries per 100,000 full-time equivalent workers”—was 15.0. And that includes all causes of death—of the 105 dead officers recorded in the 2012 data, only 51 died due to “violence and other injuries by persons or animals.” Nearly as many, 48, died in “transportation incidents,” e.g., crashing their cars.
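The Fatal Injury Rate is simple arithmetic: deaths per 100,000 full-time equivalent workers. A back-of-envelope sketch in Python; note that the FTE figure here is inferred by working backward from the published 15.0 rate, not taken directly from BLS:

```python
# Fatal Injury Rate = fatal occupational injuries per 100,000 FTE workers.
# The 105 deaths are from the 2012 CFOI data cited above; the FTE count
# is an inference from the published rate, not an official BLS figure.
deaths_2012 = 105
fte_officers = 700_000  # inferred: 105 / 15.0 * 100,000

rate = deaths_2012 / fte_officers * 100_000
print(round(rate, 1))  # 15.0
```

The same formula, applied to each occupation's death count and workforce size, generates the rates in the list below.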

Here are some occupations with higher fatality rates than being a cop:

  • Logging workers: 129.9
  • Fishers and related fishing workers: 120.8
  • Aircraft pilots and flight engineers: 54.3
  • Roofers: 42.2
  • Structural iron and steel workers: 37.0
  • Refuse and recyclable material collectors: 32.3
  • Drivers/sales workers and truck drivers: 24.3
  • Electrical power-line installers and repairers: 23.9
  • Farmers, ranchers and other agricultural managers: 22.8
  • Construction laborers: 17.8
  • Taxi drivers and chauffeurs: 16.2
  • Maintenance and repairs workers, general: 15.7

Of these, construction labor is the one I’ve done myself. This was what our required body armor looked like.

And for good measure, some more that approach the allegedly terrifying risks of being a cop:

  • First-line supervisors of landscaping, lawn service, and groundskeeping workers: 14.7
  • Grounds maintenance workers: 14.2
  • Athletes, coaches, umpires, and related workers: 13.0

While being a cop might not be all that dangerous, being in the presence of cops certainly is. In 2012, there were a minimum of 410 people killed by police, and that includes only those reported to the FBI under the creepy category of “justifiable homicide.” The real number is probably closer to 1000.

Of course, nobody who knows anything about what police actually do, and isn’t pushing a reactionary political agenda, thinks cops actually need to be dressed in heavier armor than the occupiers of Iraq and Afghanistan. And the fact that you have a better than 1-in-1000 chance of dying in any given year in certain jobs is itself scandalous. But perhaps looking at these numbers helps put the real nature of American policing in a somewhat different perspective.

Identification Politics

June 9th, 2014  |  Published in Statistics

When I first started to learn about the world of quantitative social science, it was approaching the high tide of what I call “identificationism”. The basic argument of this movement was as follows. Lots of social scientists are crafting elaborate models that basically only show the correlations between variables. They then must rely on a lot of assumptions and theoretical arguments in order to claim that an association between X and Y is indicative of X causing Y, rather than Y causing X or both being caused by something else. This can lead to a lot of flimsy and misleading published findings.

Starting in the 1980’s, critics of these practices started to emphasize what is called, in the statistical jargon, “clean identification”. Clean identification means that your analysis is set up in a way that makes it possible to convincingly determine causal effects, not just correlations.

The most time-tested and well respected identification strategy is the randomized experiment, of the kind used in medical trials. If you randomly divide people into two groups that differ only by a single treatment, you can be pretty sure that subsequent differences between the two groups are actually caused by the treatment.
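A toy simulation makes the logic concrete: randomly assign people to two groups, apply a known treatment effect to one, and the simple difference in group means recovers that effect. (All the numbers here are made up for illustration.)

```python
import random
import statistics

random.seed(0)

# Each person has an unobserved baseline outcome; random assignment ensures
# the two groups have (approximately) the same baseline distribution.
n = 10_000
baseline = [random.gauss(50, 10) for _ in range(n)]
treated = [random.random() < 0.5 for _ in range(n)]

true_effect = 5.0
outcome = [b + true_effect if t else b for b, t in zip(baseline, treated)]

treat_mean = statistics.mean(o for o, t in zip(outcome, treated) if t)
control_mean = statistics.mean(o for o, t in zip(outcome, treated) if not t)

# Close to 5.0, up to sampling noise, with no model or assumptions needed.
estimate = treat_mean - control_mean
```

Without the randomization, a difference in means like this could reflect who selected into treatment rather than the treatment itself, which is exactly the problem the identificationists were reacting against.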

But most social science questions, especially the big and important ones, aren’t ones you can do experiments on. You can’t randomly assign one group of countries to have austerity economics, and another group to have Keynesian policies. So as a second best solution, scholars began looking for so-called “natural experiments”. These are situations where, more or less by accident, people find themselves divided into two groups arbitrarily, almost as if they had been randomized in an experiment. This allows the identification of causality in non-experimental situations.

A famous early paper using this approach was David Card and Alan Krueger’s 1992 study of the minimum wage. In 1990, New Jersey had increased its minimum wage to be the highest in the country. Card and Krueger compared employment in the fast food industry in both New Jersey and eastern Pennsylvania. Their logic was that these stores didn’t differ systematically aside from the fact that some of them were subject to the higher New Jersey minimum wage, and some of them weren’t. Thus any change in employment after the New Jersey hike could be interpreted as a consequence of the higher minimum wage. In a finding that is still cited by liberal advocates, they concluded that higher minimum wages did nothing to cause lower employment, despite the predictions of textbook neoclassical economics.
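Card and Krueger’s comparison is what’s called a difference-in-differences: subtract Pennsylvania’s before-after change (the “what would have happened anyway” baseline) from New Jersey’s. A sketch with illustrative employment figures, not Card and Krueger’s actual estimates:

```python
# Average full-time-equivalent employment per fast food store, before and
# after the New Jersey minimum wage increase (illustrative numbers only).
nj_before, nj_after = 20.4, 21.0   # New Jersey: subject to the wage hike
pa_before, pa_after = 23.3, 21.2   # Pennsylvania: the comparison group

nj_change = nj_after - nj_before        # roughly +0.6
pa_change = pa_after - pa_before        # roughly -2.1

# Positive: relative to the comparison group, employment did not fall
# in the state that raised its minimum wage.
did_estimate = nj_change - pa_change    # roughly +2.7
```

The key identifying assumption is that, absent the wage hike, New Jersey’s employment would have followed the same trend as Pennsylvania’s.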

This was a useful and important paper, and the early wave of natural experiment analyses produced other useful results as well. But as time went on, the obsession with identification led to a wave of studies that were obsessed with proper methodology and unconcerned with whether they were studying interesting or important topics. Steve Levitt of “Freakonomics” fame is a product of this environment, someone who would never tackle a big hard question where an easy trivial one was available.

With the pool of natural experiments reaching exhaustion, some researchers began to turn toward running their own actual experiments. Hence the rise of the so-called “randomistas”. These were people who performed randomized controlled trials, generally in poor countries, to answer small and precisely targeted questions about things like aid policy. This work includes things like Chris Blattman’s study in which money was randomly distributed to Ugandan women.

But now, if former World Bank lead economist Branko Milanovic is to be believed, the experimental identificationists are having their own day of crisis. As with the natural experiment, the randomized trial sacrifices big questions and generalizable answers in favor of conclusions that are often trivial. With their lavishly funded operations in poor countries, there’s an added aspect of liberal colonialism as well. It’s the Nick Kristof or Bono approach to helping the global poor; as Milanovic puts it, “you can play God in poor countries, publish papers, make money and feel good about yourself.”

If there’s a backlash against the obsession with causal inference, it will be a victory for people who want to use data to answer real questions. Writing about these issues years ago, I argued that:

It is often impossible to find an analytical strategy which is both free of strong assumptions about causality and applicable beyond a narrow and artificial situation. The goal of causal inference, that is, is a noble but often futile pursuit. In place of causal inference, what we must often do instead is causal interpretation, in which essentially descriptive tools (such as regression) are interpreted causally based on prior knowledge, logical argument and empirical tests that persuasively refute alternative explanations.

I still basically stand by that, or by the pithier formulation I added later, “Causal inference where possible, causal interpretation where necessary.”

Infotainment Journalism

May 14th, 2014  |  Published in Data, Statistics

We seem, mercifully, to have reached a bit of a backlash to the data journalism/explainer hype typified by sites like Vox and Fivethirtyeight. Nevertheless, editors in search of viral content find it irresistible to crank out clever articles that purport to illuminate or explain the world with “data”.

Now, I am a big partisan of using quantitative data to understand the world. And I think the hostility to quantification in some parts of the academic Left is often misplaced. But what’s so unfortunate about the wave of shoddy data journalism is that it mostly doesn’t use data as a real tool of empirical inquiry. Instead, data becomes something you sprinkle on top of your substanceless linkbait, giving it the added appearance of having some kind of scientific weight behind it.

Some of the crappiest pop-data science comes in the form of viral maps of various kinds. Ben Blatt at Slate goes over a few of these, pertaining to things like baby names and popular bands. He shows how easy it is to craft misleading maps, even leaving aside the inherent problems with using spatial areas to represent facts about populations that occur in wildly different densities.

Having identified the pitfalls, Blatt then decided to try his hand at making his own viral map. And judging by the number of times I’ve seen his maps of the most widely spoken language in each state on Facebook, he succeeded. But in what is either a sophisticated troll or an example of “knowing too little to know what you don’t know”, Blatt’s maps themselves are pretty uninformative and misleading.

The post consists of several maps. The first simply categorizes each state according to the most commonly spoken non-English language, which is almost always Spanish. Blatt calls this map “not too interesting”, but I’d say it’s the best of the bunch. It’s the least misleading while still containing some useful information about the French-speaking clusters in the Northeast and Louisiana, and the holdout German speakers in North Dakota.

The next map, which shows the most common non-English and non-Spanish language, is also decent. It’s when he starts getting down into more and more detailed subcategories that Blatt really gets into trouble. I’ll illustrate this with the most egregious example, the map of “Most Commonly Spoken Native American Language”.

Part of the problem is the familiar statistician’s issue of sample size. The American Community Survey data that Blatt used to make his maps is extremely large, but you can still run into trouble when you’re looking at a small population and dividing it up into 50 states. Native Americans are a tiny part of the population, and those who speak an indigenous language are an even smaller fraction. The more severe issue, though, is that this map would be misleading even if it were based on a complete census of the population.

That’s because the Native American population in the United States is extremely unevenly distributed, due to the way in which the American colonial project of genocide and resettlement played out historically. In some areas, like the southwest and Alaska, there are sizable populations. In much of the east of the country, there are vanishingly small populations of people who still speak Native American languages. And without even going to the original data (although I did do that), you can see that there are some things majorly wrong here. But you need a passing familiarity with the indigenous language families of North America, which is basically what I have from a cursory study of them as a linguistics major over a decade ago.

We see that Navajo is the most commonly spoken native language in New Mexico. That’s a fairly interesting fact, as it reflects a sizeable population of around 63,000 speakers. But then, we could have seen that already from the previous “non-English and Spanish speakers” map.

But now look at the northeast. We find that the most commonly spoken native language in New Hampshire is Hopi; in Connecticut it’s Navajo; in New Jersey it’s Sahaptian. What does this tell us? The answer is, approximately nothing. The Navajo and Hopi languages originate in the southwest, and the Sahaptian languages in the Pacific northwest, so these values just reflect a handful of people who moved to the east coast for whatever reason. And a handful of people it is: do we really learn anything from the fact there are 36 Hopi speakers in New Hampshire, compared to only 24 speaking Muskogee (which originates in the south)? That is, if we could even know these were the right numbers. The standard errors on these estimates are larger than the estimates themselves, meaning that there is a very good chance that Muskogee, or some other language, is actually the most common native language in New Hampshire.
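To see why those tiny estimates are so shaky, here is a rough sketch. Treat the ACS as if it were a simple random sample with a 1 percent sampling fraction (a simplification: the real survey uses weights and replicate-based standard errors, and the fraction here is assumed for illustration). An estimate of 36 speakers then rests on well under one expected respondent, and a Poisson approximation puts the standard error above the estimate itself:

```python
import math

f = 0.01       # assumed effective sampling fraction (illustrative only)
estimate = 36  # estimated Hopi speakers in New Hampshire, from the map

# Implied number of actual survey respondents behind the weighted estimate:
implied_respondents = estimate * f        # 0.36 people

# Poisson approximation: the SE of the weighted count is sqrt(x) / f,
# where x is the number of sampled respondents.
se = math.sqrt(implied_respondents) / f   # about 60

print(se > estimate)  # True: the standard error exceeds the estimate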

I suppose this could be regarded as nitpicking, as could the similar things I could say about some of the other maps. Boy, finding out about those 170 Gujarati speakers in Wyoming sure shows me what sets that state apart from its neighbors! OMG, the few hundred Norwegian speakers in Hawaii might slightly outnumber the Swedish speakers! (Or not.) Even the “non-English and Spanish” map, which I generally kind of like, doesn’t quite say as much as it appears—or at least not what it appears to say. The large “German belt” in the plains and mountain west reflects low linguistic diversity more than a preponderance of Krauts. There is a small group of German speakers almost everywhere; in most of these states, the percentage of German speakers isn’t much greater than the national average, which is well under 1 percent. In some, like Idaho and Tennessee, it’s actually lower.

I belabor all this because I take data analysis seriously. The processing and presentation of quantitative data is a key way that facts are manufactured, a source of things people “know” about the world. So it bothers me to see the discursive pollution of things that are essentially vacuous “infotainment” dressed up in fancy terms like “data science” and “data journalism”. I mean, I get it: it’s fun to play with data and make maps! I just wish people would leave their experiments on their hard drives rather than setting them loose onto Facebook where they can mislead the unwary.

Trumbo’s Taxes

April 15th, 2014  |  Published in Data, Statistical Graphics

Having filed my taxes in my customarily last-minute fashion, I thought I’d get in on the tax day blogging thing. Via Sarah Jaffe, I came upon the following interesting passage from Victor Navasky’s history of the Hollywood blacklist, Naming Names:

Conversely, during the blacklist years, which were also tight money years for the studios, agents often found it simpler to hint to their less talented clients that their difficulties were political rather than intrinsic. Since agents as a class follow the money, it is perhaps a clue to the environment of fear within which they operated that, for example, the Berg-Allenberg Agency was, even in late 1948, ready, eager, willing, and able to lose its most profitable client, Dalton Trumbo (at $3000 per week he was one of the highest paid writers in Hollywood)—and this even before the more general system of blacklisting had gone into effect.

The first thing that struck me about this was that wow, that’s a lot of money. It’s not clear where the figure came from. But Navasky did interview Trumbo for the book, so I have to assume it came from the man himself. Now, presumably Trumbo wasn’t working all the time, but rather getting picked up for various jobs with slack periods in between. But supposing for a moment that he did: $3000 a week (or $156,000 a year) would be a pretty cushy life now, so it would have been an astronomical amount of money in 1948. (And it’s highly likely that there were people in Hollywood who were making that much. Ben Hecht is said to have gotten $10,000 a week.)

The second thing is to note that even being as rich and famous as Dalton Trumbo wasn’t enough to protect him from the blacklist. In general, of course, the rich stick together and protect their own. But there are some lines you still can’t cross, and the blacklist was one of them. In the end, ideological discipline trumped the solidarity of rich people. Which is what makes the rare radical defectors from the ruling class so significant.

But my final thought was, I wonder what Trumbo’s net income would have been, had he made that much money? After all, that was the heyday of high marginal tax rates in the United States, those legendary 90 percent tax brackets that seem so unimaginable to people now. So I got to wondering how much Trumbo would have paid in taxes then, and how much he would have paid on a comparable amount of money today.

Fortunately, the Tax Foundation provides excellent data on historical tax rates. I used the spreadsheet here, which describes the federal income tax regimes from 1913 to 2013. Using that data, we can get a rough approximation of how much our hypothetical Dalton Trumbo would have paid in taxes, although of course it doesn’t take into account any particular deductions or loopholes that may have played into an individual situation—and it’s well known that few people actually paid the very high marginal rates of that time. So take this as a quick sketch, meant to demonstrate two things. First, how much our tax rates have changed, and second, how marginal tax rates really work.

Here’s a table showing how Trumbo’s income would have broken down in 1948. Each line shows a single tax bracket. The first three lines show that rate at which income in that bracket was taxed, and the lower and upper bounds that defined which income was taxed at that rate. The last two columns show how much income Trumbo received in each bracket, and how much tax he would have owed on it.

Tax Rate   Over       But Not Over   Income    Taxes
20.0%      $0         $2,000         $2,000    $400.00
22.0%      $2,000     $4,000         $2,000    $440.00
26.0%      $4,000     $6,000         $2,000    $520.00
30.0%      $6,000     $8,000         $2,000    $600.00
34.0%      $8,000     $10,000        $2,000    $680.00
38.0%      $10,000    $12,000        $2,000    $760.00
43.0%      $12,000    $14,000        $2,000    $860.00
47.0%      $14,000    $16,000        $2,000    $940.00
50.0%      $16,000    $18,000        $2,000    $1,000.00
53.0%      $18,000    $20,000        $2,000    $1,060.00
56.0%      $20,000    $22,000        $2,000    $1,120.00
59.0%      $22,000    $26,000        $4,000    $2,360.00
62.0%      $26,000    $32,000        $6,000    $3,720.00
65.0%      $32,000    $38,000        $6,000    $3,900.00
69.0%      $38,000    $44,000        $6,000    $4,140.00
72.0%      $44,000    $50,000        $6,000    $4,320.00
75.0%      $50,000    $60,000        $10,000   $7,500.00
78.0%      $60,000    $70,000        $10,000   $7,800.00
81.0%      $70,000    $80,000        $10,000   $8,100.00
84.0%      $80,000    $90,000        $10,000   $8,400.00
87.0%      $90,000    $100,000       $10,000   $8,700.00
89.0%      $100,000   $150,000       $50,000   $44,500.00
90.0%      $150,000   $200,000       $6,000    $5,400.00
91.0%      $200,000   (no limit)     $0        $0.00

This is a nice illustration of how marginal tax rates work. There is still, unbelievably, widespread confusion about this. People think that if the marginal tax rate is 90 percent on income over $150,000—as it was in 1948—then that means you’ll only keep 10 percent of all your income if you make that much money. But Trumbo wouldn’t pay 90 percent on all of his $156,000, only on the $6000 that was over the $150,000 threshold.

So what was Trumbo’s real, overall tax rate? The tax figures above sum up to a total bill of $117,220. The Tax Foundation data also describes some additional reductions that were applied that year: 17 percent on taxes up to $400, 12 percent on taxes from $400 to $100,000, and 9.75 percent on taxes above $100,000. Taking those reductions into account, the tax bill comes down to $103,521.
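The arithmetic is easy to script. Using the 1948 schedule from the table above, a short Python sketch reproduces both the gross bill and the post-reduction figure (again, ignoring any deductions or loopholes an individual filer might have used):

```python
# 1948 bracket schedule: (rate, lower bound, upper bound).
BRACKETS_1948 = [
    (0.20, 0, 2_000), (0.22, 2_000, 4_000), (0.26, 4_000, 6_000),
    (0.30, 6_000, 8_000), (0.34, 8_000, 10_000), (0.38, 10_000, 12_000),
    (0.43, 12_000, 14_000), (0.47, 14_000, 16_000), (0.50, 16_000, 18_000),
    (0.53, 18_000, 20_000), (0.56, 20_000, 22_000), (0.59, 22_000, 26_000),
    (0.62, 26_000, 32_000), (0.65, 32_000, 38_000), (0.69, 38_000, 44_000),
    (0.72, 44_000, 50_000), (0.75, 50_000, 60_000), (0.78, 60_000, 70_000),
    (0.81, 70_000, 80_000), (0.84, 80_000, 90_000), (0.87, 90_000, 100_000),
    (0.89, 100_000, 150_000), (0.90, 150_000, 200_000),
    (0.91, 200_000, float("inf")),
]

def gross_tax(income):
    """Tax each slice of income at its own bracket's rate."""
    return sum(rate * max(0, min(income, hi) - lo)
               for rate, lo, hi in BRACKETS_1948)

def after_reductions(tax):
    """Apply the 1948 reductions: 17% on tax up to $400,
    12% from $400 to $100,000, 9.75% above $100,000."""
    reduction = (0.17 * min(tax, 400)
                 + 0.12 * max(0, min(tax, 100_000) - 400)
                 + 0.0975 * max(0, tax - 100_000))
    return tax - reduction

income = 156_000
print(round(gross_tax(income)))                    # 117220
print(round(after_reductions(gross_tax(income))))  # 103521
```

The `max(0, min(income, hi) - lo)` term is the marginal-rate logic in a single expression: each bracket taxes only the slice of income that falls between its bounds.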

So Trumbo would have had a net income of $52,479 in 1948, for an effective tax rate of 66 percent. Now, that’s not 90 percent, but some will surely say that this seems like an unreasonably high level, for reasons of fairness or work incentives or whatever. But let’s keep in mind just where our Trumbo falls in the 1948 United States’ distribution of income. Here’s a graphical representation of the above data:


Each bar is a tax bracket. The width of the bar shows how wide the bracket is, while the height shows the income earned in that bracket. The red-shaded portion shows how much of that income was paid in tax. This is a bit visually misleading, because the amount of income in each bar corresponds only to the height of the box, not its area. But I’ll swallow my data-visualization pride for the sake of a quick blog post.

A few things to note about this graph. You can see how much of the income in the higher brackets was taxed away, due to the extremely high rates there. You can also see that the tax system is progressive, because the height of the red bars slopes upward, even when the amount of money contained in the brackets remains the same. But the most important thing to pay attention to is that dotted line that you can barely see on the far left. That’s the median personal income in the United States for 1948, which according to the Census Bureau was around $1900. In other words, almost all of this would have been irrelevant to half the population, who would have paid just the lowest rate, 20 percent, on all of their income.

If we adjust Trumbo’s income for inflation with the Consumer Price Index, his income would be equivalent to over 1.5 million dollars today. And the tax bill would have been over 1 million dollars. But how would that kind of pay be taxed now? Here’s a table like the one above, except applying current tax rates to Trumbo’s inflation-adjusted pay:

Tax Rate   Over       But Not Over   Income       Taxes
39.6%      $450,000   (no limit)     $1,066,944   $422,509.82

What a difference 65 years and two generations of neoliberalism makes! Now Trumbo’s effective tax rate is only 36.15 percent, and he takes home $968,000 after a $548,000 tax bill. To finish things up, here’s a graphical representation like the one above:


This time, most of the income falls into the top bracket. But since the rate there is only 39.6 percent, our hypothetical 2013 Trumbo still keeps most of his money. And once again, these brackets are mostly irrelevant to most of the population—note the line marking median income.

The punchline to this story, of course, is that it was things like the Hollywood blacklist that helped set the stage for the period of conservative reaction that gave us these tax rates. Check this nice documentary on Dalton Trumbo to get a sense of a Hollywood radical who puts most of our contemporary celebrity liberals to shame.

The spreadsheet used to estimate these figures is here, if you care to play with it yourself.