
Post-Work: A guide for the perplexed

February 25th, 2013  |  Published in Politics, Socialism, Work, xkcd.com/386

In Sunday’s New York Times, conservative columnist Ross Douthat invokes the utopian dream of “a society rich enough that fewer and fewer people need to work—a society where leisure becomes universally accessible, where part-time jobs replace the regimented workweek, and where living standards keep rising even though more people have left the work force altogether.” This “post-work” politics may be unfamiliar to many readers of the Times, but it won’t be new to readers of Jacobin.

Post-work socialism has a proud, if dissident, tradition, from Paul Lafargue to Oscar Wilde to Bertrand Russell to André Gorz. It’s a vision that animates my writing on topics ranging across the contradictions of the work ethic, the possibilities of a post-scarcity society, the politics of sex work, and the connection between post-work politics and feminism. Others have addressed related themes, like Chris Maisano on shorter working hours as both a response to unemployment and a step forward for human freedom, and Sarah Leonard on the pro-work corporate feminism of Marissa Mayer.

The basic vision of the post-work Left, then, is one of fewer jobs, and shorter hours at the jobs we do have. Douthat suggests, however, that this vision is already becoming a reality, and he warns that it is not a result we should welcome.

It’s something of a victory that a New York Times columnist is even acknowledging the post-work perspective on labor politics, rather than ignoring it completely. Hopefully he’s been taking his own advice, and reading about it in Jacobin. But Douthat’s take is a rather peculiar one. To begin with, he claims that we have entered an era of “post-employment, in which people drop out of the work force and find ways to live, more or less permanently, without a steady job”. But it’s not clear what he bases this claim on. It’s true that the labor force participation rate—the percentage of the working-age population that is employed or looking for work—has declined in recent years. From a high of around 67 percent in the late 1990s, it declined to around 66 percent before the beginning of the last recession. The recession itself then produced another sharp decline, and the rate now stands below 64 percent.

Unfortunately, it’s unlikely that this reflects masses of people taking advantage of our material abundance to increase their leisure time. As those numbers show, most of the decline in the participation rate was due to the recession (and some of the rest is probably due to demographic shifts). If the economy returned to full employment—that is, if everyone who wanted a job could actually find one—the participation rate would probably rise again. For how else are people supposed to “find ways to live . . . without a steady job”, when incomes have stayed flat for decades despite great increases in productivity?

The post-work landscape that Douthat discovers is therefore very different from the one you’ll find surveyed in the pages of Jacobin. An economy in which people must get by on some combination of scant public benefits, charity, and hustling—because they are unable to find a job—is very different from a world where people are able to make a real choice to either cut back their hours or drop out of paid work entirely for a period of time. That’s why, in different ways, Maisano, Seth Ackerman, and I have all emphasized that full employment is central to the project of work reduction, because tight labor markets give workers the bargaining power to demand shorter hours even without cuts in pay. And it’s why I have especially emphasized the demand for a Universal Basic Income, which would make it possible for a much larger segment of the population to survive outside of paid labor.

If Douthat’s account of labor force participation is misleading, his account of working time is equally incomplete. “Long hours”, he claims, “are increasingly the province of the rich.” While this claim isn’t precisely wrong, at least within certain narrow parameters, it obscures much more than it reveals. Douthat links to an economic study that finds longer average weekly hours among those at the top of the wage distribution, relative to those at the bottom. This is not a unique finding; the sociologists Jerry Jacobs and Kathleen Gerson found something similar in their study The Time Divide. And as it happens, I have some published academic research on the topic as well. In many rich countries, including the United States, highly educated workers (e.g., those with college degrees) report longer average work weeks than the less educated (who also tend to be lower waged, of course).

This finding is often deployed to dismiss the significance of long hours, much the way Douthat does here. If the longest hours are being worked by those who presumably have the most power and leverage in the labor market, the argument goes, then long hours shouldn’t be such a concern. But this is wrong for several reasons.

First, just because hours are longest at the top end of the wage distribution doesn’t mean they aren’t long elsewhere as well—in my research, I found that reported average hours among men were above 40 hours per week across all educational categories. And hours on the job don’t cover all the other time people spend working: time spent commuting to work, time spent performing unpaid household and care work (which those on low wages often can’t buy paid replacements for), and what the sociologist Guy Standing calls “work-for-labor”: the work of looking for jobs, navigating state and private bureaucracies, networking, and other things that are preconditions for getting work but are themselves unpaid.

Second, working time is characterized by pervasive mismatches between hours and preferences, which are more complicated than just hours that are “too long”. Jeremy Reynolds has found that a majority of workers say that they would like to work a different schedule than they do, but that these preferences are split between those who would like to work less and those who would like more hours—overemployment alongside underemployment.

The finding that many people report working fewer hours than they would like reflects an economy in which many low-wage workers face uncertain schedules and enforced part-time hours that exclude them from benefits. These workers would clearly benefit from predictable hours, higher wages, and recourse to good health care benefits that aren’t tied to employment, but it’s far from clear that they would benefit from more work, as such.

And Douthat would almost seem to agree. In a passage I could have written myself, he says:

There is a certain air of irresponsibility to giving up on employment altogether, of course. But while pundits who tap on keyboards for a living like to extol the inherent dignity of labor, we aren’t the ones stocking shelves at Walmart or hunting wearily, week after week, for a job that probably pays less than our last one did. One could make the case that the right to not have a boss is actually the hardest won of modern freedoms: should it really trouble us if more people in a rich society end up exercising it?

Amazingly, he follows this up by answering that last question with a resounding yes. And I might almost be inclined to follow him, if he based his conclusion on the argument I’ve just presented: that in an environment of pervasive unemployment, high costs of living, and a meager and narrowly targeted welfare state, the loss of work isn’t exactly something to celebrate.

Perhaps realizing, however, that this austere vision is hardly a compelling case for the conservative worldview, Douthat tries a different tack. Having acknowledged the implausibility of the “dignity of labor” case for much actually-existing work, he nevertheless moves right on to the claim that “even a grinding job tends to be an important source of social capital, providing everyday structure for people who live alone, a place to meet friends and kindle romances for people who lack other forms of community, a path away from crime and prison for young men, an example to children and a source of self-respect for parents.” He concludes with an appeal to the importance of “human flourishing”, but it’s hard to see much social capital, lasting interpersonal connection, or human flourishing going on in the Amazon warehouse—or for that matter, at Pret a Manger.

Although it’s pitched in a kindlier, New York Times-friendly tone, Douthat’s argument is reminiscent of Charles Murray’s argument that the working class needs the discipline and control provided by working for the boss, lest they come socially unglued altogether. Good moralistic scold that he is, Douthat sees the decline of work as part of “the broader turn away from community in America—from family breakdown and declining churchgoing to the retreat into the virtual forms of sport and sex and friendship.” It seems more plausible that it is neoliberal economic conditions themselves—a scaled back social safety net, precarious employment, rising debts, and uncertain incomes—that have produced whatever increase in anomie and isolation we experience. The answer to that is not more work but more protection from life’s unpredictable risks, more income, more equality, more democracy—and more time beyond work to take advantage of all of it.

The Change is Too Damn Fast

March 13th, 2012  |  Published in Politics, xkcd.com/386

Matt Yglesias has responded to me, although in a way that sort of misses the point I was trying to make.

Part of his post is given over to reiterating the position that increasing the amount of housing stock in desirable cities would be a correct and egalitarian thing to do, even if it inconveniences some of the incumbent owners and residents. Let me emphasize that I agree with this. But he goes on to speculate that I hedged my position because it “makes [me] feel icky to embrace deregulation”, as though my critique were a symptom of an affective disorder.

That really isn’t the point. I’m actually quite a bit farther toward the left-neoliberal “deregulate and redistribute” end of things than many of my comrades on the Left. My argument—which was meant as a self-critique of my own tendencies as much as Yglesias’—is that we need to be attentive to people’s legitimate objections to rapid change, which complicate any project that wants to substantially rearrange the existing order.

Yglesias doesn’t really respond to my argument that his overall deregulatory project tends to make life more volatile, when stability is itself a value to a lot of people. What he does say, in response to my comment that “there’s no a priori reason to say that the desire to have a stable, predictable life or job or neighborhood is less valid than the desire to maximize economic growth”, is that:

The question is not whether some fixed pool of people should give up stability in exchange for more money. The question is whether the incumbents should be asked to give up some stability for the sake of other people who are currently excluded from the opportunities the incumbents enjoy. My answer is that yes they should. That we should work toward plentiful housing not merely for its own sake, but precisely for the sake of equality.

The language of “incumbents” and “insiders” plays a central role in the neoliberal critique of regulation, whether in land use or in the labor market. And it’s an argument I have some sympathy for. One of the things that most irks me about progressive nostalgia for the post-New Deal golden age is the way it elides the exclusions—of non-whites, of women, of non-union members—that made up the other side of stable high wage employment for the white male breadwinner.

But if an analyst portrays the issue merely in terms of a few insiders and an excluded mass, then he sets himself too easy a task. It’s not just rich owners of San Francisco real estate who benefit from some kind of “insider” status. Many of us are insiders, whether due to rent regulations or union membership or occupational licensing. In any particular case, it’s easy to set this up as a matter of egalitarianism and access. But generalized across the entire economy, what this amounts to is everyone (or most people) losing stability to some degree, in return for everyone having more freedom and access. There can be a tradeoff between equality and stability, and my point was simply that it is a tradeoff. And it’s the unwillingness to jump into the whirlwind of market relations that I think drives some of the revulsion at Yglesias’ political project from certain quarters.

People want, and have always wanted, institutions that protect them from the pressures of the market. Even if one would like people to act as perfect left-neoliberal subjects—obeying the dictates of the profit motive by day, enjoying their generous transfer payments by night—the historical evidence is that people rarely behave that way. This argument is basically drawn from Polanyi; here is how the deregulators of an earlier age are criticized in The Great Transformation:

Nowhere has liberal philosophy failed so conspicuously as in its understanding of the problem of change. Fired by an emotional faith in spontaneity, the common-sense attitude toward change was discarded in favor of a mystical readiness to accept the social consequences of economic improvement, whatever they might be. The elementary truths of political science and statecraft were first discredited then forgotten. It should need no elaboration that a process of undirected change, the pace of which is deemed too fast, should be slowed down, if possible, so as to safeguard the welfare of the community. Such household truths of traditional statesmanship, often merely re-teachings of a social philosophy inherited from the ancients, were in the nineteenth century erased from the thoughts of the educated by the corrosive of a crude utilitarianism combined with an uncritical reliance on the alleged self-healing virtues of unconscious growth.

Polanyi’s argument wasn’t merely a normative one, but an analysis of history. He argued that industrial society was characterized by a “double movement”, in which efforts to subordinate society to the self-regulating market were met with the “self-protection of society”. This entailed efforts to impose limits on the market’s control over the “fictitious commodities”: labor, money, and, yes, land. It should be noted that Polanyi believed that the cataclysmic changes wrought by capitalism—the enclosures, the industrial revolution—were on balance good things for humanity. But he believed that someone needed to stand athwart history yelling “slow down!”:

A belief in spontaneous progress must make us blind to the role of government in economic life. This role consists often in altering the rate of change, speeding it up or slowing it down as the case may be; if we believe that rate to be unalterable—or even worse, if we deem it a sacrilege to interfere with it—then, of course, no room is left for intervention.

When it comes to the abundance-stability tradeoff, Yglesias and I are more on the same side than not—I’m ready to move in the direction of abundance, relative to the status quo. But we still have to take into account the disruptive impact of removing someone’s “insider” protection—whether it’s a restrictive zoning ordinance or an occupational licensing scheme. The insiders have to be either persuaded, bribed, or coerced into giving up their privileges. And since a large proportion of Americans are “insiders” in one or another part of the economy, figuring out how to strike this balance has major implications for the democratic legitimacy and the feasibility of the project Yglesias is advocating. Which is why I spend so much time talking about ways to counteract the volatility of life in contemporary capitalism—like, for instance, the basic income—without reproducing insider-outsider dynamics.

The Perils of Extrapolation

November 18th, 2011  |  Published in Political Economy, xkcd.com/386

So Kevin Drum and Matt Yglesias have read Erik Brynjolfsson and Andrew McAfee’s Race Against the Machine e-book, and both of them managed to come away impressed by the exact argument that I identified as the weakest part of the book’s case. Namely, the belief that Moore’s law—which stipulates that computer processing power increases at an exponential rate—can be extrapolated into the indefinite future. It’s true that Moore’s law seems to have held fairly well up to this point; and as Drum and Yglesias observe, if you keep extending it into the future, then pretty soon computing power will shoot up at an astronomically fast rate—that’s just the nature of exponential functions. On this basis, Drum predicts that artificial intelligence is “going to go from 10% of a human brain to 100% of a human brain, and it’s going to seem like it came from nowhere”, while Yglesias more generally remarks that “we’re used to the idea of rapid improvements in information technology, but we’re actually standing on the precipice of changes that are much larger in scale than what we’ve seen thus far.”
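The arithmetic that makes this scenario feel so dramatic is easy to sketch. Here is a minimal illustration of naive exponential extrapolation (my own toy example, with an assumed 24-month doubling period; nothing here comes from the book):

```python
# Naive Moore's-law extrapolation: capability doubles every 24 months,
# so after `years` it has multiplied by 2 ** (years / 2).

def moores_law_factor(years, doubling_period=2.0):
    """Growth multiple after `years` under a fixed doubling period."""
    return 2.0 ** (years / doubling_period)

# Extending the trend forward quickly yields astronomical multiples:
for horizon in (10, 20, 40):
    print(f"{horizon} years: x{moores_law_factor(horizon):,.0f}")
```

Ten years of doubling gives a 32-fold increase; forty years gives a factor of over a million. The whole question, of course, is whether the doubling period can be treated as a constant—which is exactly what the argument below disputes.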

Let’s revisit the problem with this argument, which I laid out in my review. The gist of it is that just because you think you’re witnessing exponential progress, that doesn’t mean you should expect that same rate of exponential growth to continue indefinitely. I’ll turn the mic over to Charles Stross, from whom I picked up this line of critique:

Around 1950, everyone tended to look at what the future held in terms of improvements in transportation speed.

But as we know now, that wasn’t where the big improvements were going to come from. The automation of information systems just weren’t on the map, other than in the crudest sense — punched card sorting and collating machines and desktop calculators.

We can plot a graph of computing power against time that, prior to 1900, looks remarkably similar to the graph of maximum speed against time. Basically it’s a flat line from prehistory up to the invention, in the seventeenth or eighteenth century, of the first mechanical calculating machines. It gradually rises as mechanical calculators become more sophisticated, then in the late 1930s and 1940s it starts to rise steeply. From 1960 onwards, with the transition to solid state digital electronics, it’s been necessary to switch to a logarithmic scale to even keep sight of this graph.

It’s worth noting that the complexity of the problems we can solve with computers has not risen as rapidly as their performance would suggest to a naive bystander. This is largely because interesting problems tend to be complex, and computational complexity rarely scales linearly with the number of inputs; we haven’t seen the same breakthroughs in the theory of algorithmics that we’ve seen in the engineering practicalities of building incrementally faster machines.

Speaking of engineering practicalities, I’m sure everyone here has heard of Moore’s Law. Gordon Moore of Intel coined this one back in 1965 when he observed that the transistor count on an integrated circuit for minimum component cost doubles every 24 months. This isn’t just about the number of transistors on a chip, but the density of transistors. A similar law seems to govern storage density in bits per unit area for rotating media.

As a given circuit becomes physically smaller, the time taken for a signal to propagate across it decreases — and if it’s printed on a material of a given resistivity, the amount of power dissipated in the process decreases. (I hope I’ve got that right: my basic physics is a little rusty.) So we get faster operation, or we get lower power operation, by going smaller.

We know that Moore’s Law has some way to run before we run up against the irreducible limit to downsizing. However, it looks unlikely that we’ll ever be able to build circuits where the component count exceeds the number of component atoms, so I’m going to draw a line in the sand and suggest that this exponential increase in component count isn’t going to go on forever; it’s going to stop around the time we wake up and discover we’ve hit the nanoscale limits.

So to summarize: transportation technology looked like it was improving exponentially, which caused people to extrapolate that forward into the future. Hence the futurists and science fiction writers of the 1950s envisioned a future with flying cars and voyages to other planets. But what actually happened was that transportation innovation plateaued, and a completely different area, communications, became the source of major breakthroughs. And that’s because, as Stross says later in the essay, “new technological fields show a curve of accelerating progress — until it hits a plateau and slows down rapidly. It’s the familiar sigmoid curve.”

And as Stross says elsewhere, “the first half of a sigmoid demand curve looks like an exponential function.” This is what he means:

[Figure: sigmoid and exponential curves]

The red line in that image is an exponential function, and the black line is a sigmoid curve. Think of these as two possible paths of technological development over time. If you’re somewhere around that black X mark, you won’t really be able to tell which curve you’re on.
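You can see the same thing numerically. Here is a minimal sketch (my own construction, with arbitrary parameters; nothing here is from Stross’s essay) comparing a logistic S-curve with the pure exponential it resembles early on:

```python
import math

def logistic(t, ceiling=1.0, k=1.0, t0=0.0):
    """An S-shaped (sigmoid) growth curve that saturates at `ceiling`;
    t0 is the inflection point, k the steepness."""
    return ceiling / (1.0 + math.exp(-k * (t - t0)))

def exponential(t, ceiling=1.0, k=1.0, t0=0.0):
    """The pure exponential that approximates the logistic when t << t0."""
    return ceiling * math.exp(k * (t - t0))

# Well before the inflection point, the two curves nearly coincide...
for t in (-6, -5, -4):
    print(t, round(logistic(t), 4), round(exponential(t), 4))

# ...but past it they part ways: the exponential keeps exploding
# while the sigmoid flattens out against its ceiling.
print(6, round(logistic(6), 4), round(exponential(6), 2))
```

An observer sitting at the X mark—with data only from the early stretch—has no statistical way to tell which of the two curves generated it; the divergence only shows up after the inflection point.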

But I’m inclined to agree with Stross that we’re more likely to be on the sigmoid path than the exponential one, when it comes to microprocessors. That doesn’t mean that we’ll hit a plateau with no big technological changes at all. It’s just that, as Stross says in yet another place:

New technologies slow down radically after a period of rapid change during their assimilation. However, I can see a series of overlapping sigmoid curves that might resemble an ongoing hyperbolic curve if you superimpose them on one another, each segment representing the period of maximum change as a new technology appears.

Hence economic growth as a whole can still look like it’s following an exponential path.
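As a toy illustration of that superposition (again my own construction, not Stross’s): add up staggered logistic curves, each starting later and with a higher ceiling than the last, and the total keeps accelerating even though every component flattens out.

```python
import math

def logistic(t, ceiling=1.0, k=1.0, t0=0.0):
    """A single S-curve saturating at `ceiling`, inflecting at t0."""
    return ceiling / (1.0 + math.exp(-k * (t - t0)))

def overlapping_sigmoids(t, waves=4, spacing=4.0, scale=10.0):
    # Each successive "technology" is an S-curve that starts `spacing`
    # later and has a ceiling `scale` times higher than the one before.
    # Every individual curve saturates, but the sum keeps accelerating.
    return sum(scale ** i * logistic(t, ceiling=1.0, t0=spacing * i)
               for i in range(waves))

for t in range(0, 16, 4):
    print(t, round(overlapping_sigmoids(t), 1))
```

With these (arbitrary) parameters the total grows roughly tenfold per interval—an exponential-looking aggregate built entirely out of curves that individually plateau.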

None of which is to say that I wholly reject the thesis of Brynjolfsson and McAfee’s book—see the review for my thoughts on that. In a way, I think Drum and Yglesias are underselling just how weird and disruptive the future of technology will be—it’s not just that it will be rapid, but that it will come in areas we can’t even imagine yet. But we should be really wary of simply extending present trends into the future—our recent history of speculative economic manias should have taught us that if something can’t go on forever, it will stop.

The Fog of War and the Case for Knee-jerk Anti-Interventionism

November 10th, 2011  |  Published in Imperialism, Politics, xkcd.com/386

In my last post on Libya, I took a sort of squishy position: while avoiding a direct endorsement of the NATO military campaign there, I wanted to defend the existence of a genuine internal revolutionary dynamic, rather than dismissing the resistance to Gaddafi as a mere puppet of Western imperialism. I still basically stand by that position, and I still think the ultimate trajectory of Libya remains in doubt. But all that aside, it’s important to look back carefully at the run-up to the military intervention. A couple of recent essays have tried to do so—one of them is an exemplary struggle to get at the real facts around the decision to go to war, while the other typifies the detestable self-congratulatory moralizing of the West’s liberal warmongers.

The right way to look back on Libya is this article in the London Review of Books, which I found by way of Corey Robin. Hugh Roberts, formerly of the International Crisis Group, casts a very skeptical eye on the claims made by the NATO powers in the run-up to war, and on the intentions of those who were eager to intervene on the side of the Libyan rebels. At the same time, he acknowledges the intolerable nature of the Gaddafi regime and accepts the reality of an internally-generated political resistance that was not merely fabricated by external powers. But rather than accepting the claims of foreign powers at face value, he shows all the ways in which NATO actually managed to subvert the emergence of a real democratic political alternative in Libya, and he leaves me wondering once again whether the revolution would have been better off if it could have proceeded without external interference.

There are a few particularly important points that I want to draw out of Roberts’ essay. First, he shows that, in a pattern that is familiar from the recent history of “humanitarian” interventions, many of the claims that were used to justify the imminent necessity of war do not hold up under scrutiny. To begin with, there is the claim that military force had to be used because all other options had been exhausted. As Roberts observes:

Resolution 1973 was passed in New York late in the evening of 17 March. The next day, Gaddafi, whose forces were camped on the southern edge of Benghazi, announced a ceasefire in conformity with Article 1 and proposed a political dialogue in line with Article 2. What the Security Council demanded and suggested, he provided in a matter of hours. His ceasefire was immediately rejected on behalf of the NTC by a senior rebel commander, Khalifa Haftar, and dismissed by Western governments. ‘We will judge him by his actions not his words,’ David Cameron declared, implying that Gaddafi was expected to deliver a complete ceasefire by himself: that is, not only order his troops to cease fire but ensure this ceasefire was maintained indefinitely despite the fact that the NTC was refusing to reciprocate. Cameron’s comment also took no account of the fact that Article 1 of Resolution 1973 did not of course place the burden of a ceasefire exclusively on Gaddafi. No sooner had Cameron covered for the NTC’s unmistakable violation of Resolution 1973 than Obama weighed in, insisting that for Gaddafi’s ceasefire to count for anything he would (in addition to sustaining it indefinitely, single-handed, irrespective of the NTC) have to withdraw his forces not only from Benghazi but also from Misrata and from the most important towns his troops had retaken from the rebellion, Ajdabiya in the east and Zawiya in the west – in other words, he had to accept strategic defeat in advance. These conditions, which were impossible for Gaddafi to accept, were absent from Article 1.

Whether or not you believe that the Gaddafi side would ever have seriously engaged in negotiations over a peaceful settlement, or whether you think such negotiations would have been preferable to complete rebel military victory, it seems clear that the NATO powers never really gave them the chance. This is reminiscent of what happened prior to the bombing of Serbia in 1999: NATO started bombing after claiming that Serbia refused a peaceful settlement of the Kosovo conflict. What actually happened was that NATO presented the Serbs with a “settlement” that would have given NATO troops the right to essentially take control of Serbia. The Serbs understandably objected to this, though they were willing to accept international peacekeepers. But this wasn’t enough for NATO, and so it was bombs away.

A second element of the brief for the Libya war that Roberts highlights is the peculiar case of the imminent Benghazi massacre. Recall that among the war’s proponents, it was taken as accepted fact that, when NATO intervened, Gaddafi’s forces were on the verge of conducting a genocidal massacre of civilians in rebel-held Benghazi, and thereby snuffing out any hope for the revolution. Here is what Roberts has to say about that:

Gaddafi dealt with many revolts over the years. He invariably quashed them by force and usually executed the ringleaders. The NTC and other rebel leaders had good reason to fear that once Benghazi had fallen to government troops they would be rounded up and made to pay the price. So it was natural that they should try to convince the ‘international community’ that it was not only their lives that were at stake, but those of thousands of ordinary civilians. But in retaking the towns that the uprising had briefly wrested from the government’s control, Gaddafi’s forces had committed no massacres at all; the fighting had been bitter and bloody, but there had been nothing remotely resembling the slaughter at Srebrenica, let alone in Rwanda. The only known massacre carried out during Gaddafi’s rule was the killing of some 1200 Islamist prisoners at Abu Salim prison in 1996. This was a very dark affair, and whether or not Gaddafi ordered it, it is fair to hold him responsible for it. It was therefore reasonable to be concerned about what the regime might do and how its forces would behave in Benghazi once they had retaken it, and to deter Gaddafi from ordering or allowing any excesses. But that is not what was decided. What was decided was to declare Gaddafi guilty in advance of a massacre of defenceless civilians and instigate the process of destroying his regime and him (and his family) by way of punishment of a crime he was yet to commit, and actually unlikely to commit, and to persist with this process despite his repeated offers to suspend military action.

Roberts goes on to cast doubt on one of the specific claims of atrocity against Gaddafi: that his air force was strafing protestors on the ground. This claim was widely propagated by media like Al-Jazeera and liberal war-cheerleaders like Juan Cole, but Roberts finds no convincing evidence that it ever actually occurred. Reporters who were in Libya didn’t get reports of it, nor is there any photographic evidence—this despite the ubiquity of cell-phone camera footage in the wave of recent uprisings. The evaporation of the sensational allegation calls to mind the run-up to yet another war: the first Gulf War, when war against Iraq was sold, in part, by way of a thoroughly made-up story about Iraqi troops ripping Kuwaiti babies out of incubators and leaving them to die.

Beyond revealing the weakness of the empirical case for war, Roberts also highlights something I hadn’t really thought of before: the way the West’s case for intervention promotes an anti-political and undemocratic framing of the conflict that has a lot in common with the sort of anti-ideological elite “non-partisanship” that I wrote about a couple of weeks ago in the context of domestic politics. Roberts observes that the NATO powers portrayed themselves as the defenders of an undifferentiated “Libyan people” rather than partisans taking one side in a civil war. By doing so, they short-circuited the development of a real political division within Libyan society, a development that in itself was a desirable process:

The idea that Gaddafi represented nothing in Libyan society, that he was taking on his entire people and his people were all against him was another distortion of the facts. As we now know from the length of the war, the huge pro-Gaddafi demonstration in Tripoli on 1 July, the fierce resistance Gaddafi’s forces put up, the month it took the rebels to get anywhere at all at Bani Walid and the further month at Sirte, Gaddafi’s regime enjoyed a substantial measure of support, as the NTC did. Libyan society was divided and political division was in itself a hopeful development since it signified the end of the old political unanimity enjoined and maintained by the Jamahiriyya. In this light, the Western governments’ portrayal of ‘the Libyan people’ as uniformly ranged against Gaddafi had a sinister implication, precisely because it insinuated a new Western-sponsored unanimity back into Libyan life. This profoundly undemocratic idea followed naturally from the equally undemocratic idea that, in the absence of electoral consultation or even an opinion poll to ascertain the Libyans’ actual views, the British, French and American governments had the right and authority to determine who was part of the Libyan people and who wasn’t. No one supporting the Gaddafi regime counted. Because they were not part of ‘the Libyan people’ they could not be among the civilians to be protected, even if they were civilians as a matter of mere fact. And they were not protected; they were killed by Nato air strikes as well as by uncontrolled rebel units. The number of such civilian victims on the wrong side of the war must be many times the total death toll as of 21 February. But they don’t count, any more than the thousands of young men in Gaddafi’s army who innocently imagined that they too were part of ‘the Libyan people’ and were only doing their duty to the state counted when they were incinerated by Nato’s planes or extra-judicially executed en masse after capture, as in Sirte.

It’s possible, after reading all of Roberts’ essay, to remain convinced that the NATO attack was a lesser evil on balance, and to retain some optimism about the future trajectory of Libya. But he nevertheless provides an important reminder of just why it’s so important to beware of Presidents bearing “humanitarian” interventions. The liberal war-mongering crowd likes to deride those of us who bring strongly anti-interventionist biases into these debates, on the grounds that we are irrationally prejudiced against the United States, or against the possible benefits of war. But in the immediate prelude to war, such biases are in fact entirely rational, precisely because the real dynamics on the ground are so murky and hard to determine, and the arguments used to justify intervention so often turn out to be illusory after the fact.

This reality does not, however, prevent the liberal hawk faction from coming out with some triumphant breast-beating and score-settling when their little war looks to be a “success”. Michael Berube has a new essay in this genre, and it’s terrible in all the ways the Roberts essay is excellent. In both tone and content, it’s a shameful piece of writing, and Berube should be embarrassed to have written it—but since it placates the tortured soul of the liberal bombardier, he is instead hailed as a brave and sophisticated thinker.

Berube argues that opponents of the war in Libya are fatally flawed by a “manichean” approach to foreign policy: rather than appreciate the nuances of the situation in Libya, he claims, opponents of the war lazily fell back on “tropes that have been forged over the past four decades of antiwar activism”. These tropes, says Berube, are an impediment to forging “a rigorously internationalist left in the U.S., a left that will promote and support the freedom of speech, the freedom to worship, the freedom from want, and the freedom from fear—even on those rare and valuable occasions when doing so puts one in the position of supporting U.S. policies.”

This is, I suppose, an improvement on Michael Walzer’s call for a “decent” left (where “decency” consists of an appropriate deference to U.S. imperial propaganda). But as the Roberts essay shows, the pro-war faction are on shaky ground when they accuse others of relying on a ritualized set of tropes: the imminent humanitarian disaster and the impossibility of a non-military solution are themselves the repetitive–and routinely discredited–way in which war is sold to those who consider themselves liberals and internationalists. The eagerness of people like Berube to pick up on any thinly-sourced claim that vindicates the imminence and necessity of bombs suggests that the case for humanitarian intervention has become increasingly routinized as the Libyas, Iraqs, and Serbias pile up.

And it is striking that, in contrast to the careful skepticism of Roberts, Berube simply assumes that NATO action was necessary to prevent imminent catastrophe. In doing so, he evades all the difficult questions that arise in the Roberts essay. He relies, for example, on Juan Cole’s refutation of numerous alleged “myths” of the anti-interventionists; among them is the argument that “Qaddafi would not have killed or imprisoned large numbers of dissidents in Benghazi, Derna, al-Bayda and Tobruk if he had been allowed to pursue his March Blitzkrieg toward the eastern cities that had defied him”. Berube derides this claim as “bizarre”, and indeed it would be if this were actually the argument that any serious party had made. But the argument for intervention was not merely that Gaddafi could potentially have “killed or imprisoned large numbers of dissidents”. As Roberts notes, that’s the inevitable end result of just about any failed armed rebellion, and imprisonment and killing was probably an unavoidable endgame no matter how matters in Libya were resolved. The victorious rebels, after all, have imprisoned or extrajudicially killed a large number of people on the pro-Gaddafi side, including Gaddafi himself; and that’s not to speak of the direct civilian casualties from the actual bombing campaign.

But Berube elides all of this, by implying that those who questioned the predictions of a humanitarian apocalypse were absurdly denying the possibility of any retaliation at all against the rebels. Thus, while acknowledging that in principle “the Libya intervention could be subjected to cost/benefit analyses and consequentialist objections”, he proceeds to pile up the human costs of non-intervention, while leaving his side of the ledger clear of any of the deaths that resulted from the decision to intervene. This allows him to portray the pro-intervention side as the sole owners of facts and common sense, before launching into his real subject: the perfidy and moral obtuseness of the war’s critics.

He finds plenty of juicy targets, because there was indeed some dodgy argumentation on the anti-war side. There was, as there always is, a certain amount of vulgar anti-imperialism that insisted that opposing NATO meant glorifying Gaddafi and dismissing the legitimacy of his opposition. There was, too, an occasional tendency to obsess over the war’s legality, even though law in an international context is always rather capricious and dependent on great-power politics. And Berube is clever enough to anticipate the objections to his highlighting of such arguments:

Those who believe that there should be no enemies to one’s left are fond of accusing me of “hippie punching,” as if, like Presidents Obama and Clinton, I am attacking straw men to my left in order to lay claim to the reasonable, vital center; those who know that I am not attacking straw persons are wont to claim instead that I am criticizing fringe figures who have no impact whatsoever on public debate in the United States. And it is true: on the subject of Libya the usual fringe figures behaved precisely as The Left At War depicts the Manichean Left. Alexander Cockburn, James Petras, Robert Fisk, John Pilger—all of them still fighting Vietnam, stranded for decades on a remote ideological island with no way of contacting any contemporary geopolitical reality whatsoever—weighed in with the usual denunciations of US imperialism and predictions that Libya would be carved up for its oil. And about the doughty soi-disant anti-imperialists who, in the mode of Hugo Chavez, doubled down on the delusion that Qaddafi is a legitimate and benevolent ruler harassed by the forces of imperialism, there really is nothing to say, for there can be nothing more damning than their own words.

For the record: yes indeed, Berube is engaged in “hippie punching”, attacking straw men, and selectively nutpicking the worst arguments on the anti-war side. And to what end? As with so much liberal imperialism, it seems that the purpose here is not so much to provide an empirical and political case for the war, as it is to confirm the superior moral sensibility of the warmongers, who are committed to high-minded internationalist ideals while their opponents are mired in knee-jerk anti-Americanism. The conflation of good intentions with good results bedevils liberal politics in all kinds of ways, and nowhere is it more damaging than in the realm of international politics, where morally pure allegiances are difficult to find.

Berube complains that “for what I call the Manichean Left, opposition to U.S. policy is precisely an opposition to entities: all we need to know, on that left, is that the U.S. is involved.” To this, he counterposes his rigorous case-by-case evaluation of specific actions, which is indifferent to the identity of the parties involved. But while this is a sound principle in the abstract, Roberts’ exposé of the shaky Libya dossier demonstrates why it is so dangerous in practice. Given our limited ability to evaluate, in the moment, the hyperbolic claims made by governments on the warpath, a systematic bias against supporting intervention is the only way to counter-balance what would otherwise be a bias in favor of accepting propaganda at face value, and thereby supporting war in every case. Even if the outcome in Libya turns out to be an exceptional best-case scenario—a real democracy, independent of foreign manipulation—this is insufficient reason to substantially revise a general-purpose anti-interventionist prior. And even if the outcome of the NATO campaign has not played out as badly as some anti-war voices predicted, the details of that campaign’s marketing only tend to confirm the danger of making confident statements of martial righteousness while enveloped in the fog of war.

The Conservative Leftist and the Radical Longshoreman

September 29th, 2011  |  Published in Political Economy, Politics, Work, xkcd.com/386

Via Yglesias, I find to my dismay that some alleged progressives at Lawyers, Guns, and Money are exulting in the failure of supermarkets to replace human checkers with automatic checking machines. Like Yglesias, I don’t think bemoaning automation in this way is helpful. He gives the empirical argument that slow productivity growth hasn’t historically been good for workers, and that too-low wages are probably one of the things impeding the adoption of productivity-enhancing technology. The second is an argument that I made before, specifically using the supermarket checkout machine as an example. But now I want to make a broader ideological point about this.

These two posts, the one from Erik Loomis and especially the follow up by “DJW”, contain two distinct arguments for the anti-machine position. To take the second and less compelling one first, there’s the claim that maybe being a supermarket checker isn’t so alienating and menial after all:

Secondly, this line of thinking makes some assumptions that I’m sympathetic to, but can’t entirely get on board with. First, the assumption that we can theorize about jobs in this concrete and certain way and determine that supermarket checker (and I assume many much worse jobs) are ‘menial’ and we should hope for a world in which humans don’t do that sort of thing. I like my early Marx, too, but I can’t get on board with this. I simply don’t think we have the tools to do this kind of universal theorizing about the essential nature and value of this or that job. People have long found meaning and dignity in all manner of repetitive and uncreative work. Others have approached the world of work with indifference; they work to pay the bills and finding meaning and value in other aspects of their lives. Marx, of course, chalked this sort of thing up to alienation and false consciousness and the like, but I’m more of pluralist about what a dignified and fully human life looks like. At a minimum, I don’t have all the answers, and have a healthy distrust of letting my own tastes and proclivities get in the way of respecting other’s ability to determine what they value about their lives on their own terms.

This is reminiscent of my exchange with Reihan Salam from a couple of months ago, and I don’t find this argument any more compelling from the left than I did from the right. I’ll just note that by framing the issue in this way, DJW totally effaces the real nature of work in a capitalist society. To pretend that the existence of many people who work as supermarket checkers reflects their “ability to determine what they value about their lives on their own terms” is to ignore the reality that for the worker without independent wealth, the only “choice” is between obtaining the wage they need to get by, or starving in the streets. You don’t see a lot of trust-fund kids or lottery winners working as supermarket checkers.

Moreover, there’s no principled rationale here. If the menial jobs we have are good, then why wouldn’t more of them be better? We could solve the jobs deficit through a campaign against technology throughout the economy. This would also have the effect of lowering our material standard of living, but to this way of thinking that’s presumably a good thing.

I doubt the LGM bloggers really endorse such a program, though. As I said, I don’t think the argument is based on an ideological principle at all; rather, it’s the result of a pragmatic calculation:

First, let’s be clear that this is some deeply utopian stuff. This makes third party advocates seem downright practical. We’ve had a modern capitalist economy for quite some time now, in many different countries, and I can’t think of any that have come anywhere close to this, or made it a meaningful priority. Of course some unpleasant and meaningful jobs have been largely eliminated, and more probably will be in the future, but when this does occur it is almost always with indifference or actual malice toward the eliminated worker, rather than compassion. And while the overall mix of jobs in a society may improve for the better over time, it’s virtually never the case that workers in eliminated fields end up better off. If the elimination takes place in a moment of robust employment they may be OK, but for the most part those who lose the jobs are going to be worse off for a good long while. Even in the most robust and humane welfare states the modern world has developed, unemployment is generally associated with a decline in living standards, sense of self-worth, and so on.

Leave aside for a moment that this argument sort of implies that no-one should ever lose their job, which is inconsistent with the assumption of a capitalist economy; I’m willing to chalk that up to a sloppy formulation. The general principle being expressed here isn’t unreasonable or irrational: sometimes it’s better to help a few workers here and now than to run off after utopian pie in the sky, and we should be wary of the slippery logic that it’s OK to impose hardship on a few workers for the sake of the greater good. This is the same thinking that’s at work in defenses of licensing cartels that protect some workers at the expense of consumers and excluded laborers, and in attacks on investments in urban infrastructure that may have the effect of pricing some people out of their neighborhoods. These aren’t silly things to be worried about–if you can’t achieve anything positive, you should at least do no harm. And as the left has gotten weaker and weaker, such arguments have gotten more and more plausible. But we’ve reached a point where some people seem to be opposed to any policy at all that imposes a burden on any group of workers.

It’s an attitude that bespeaks an intensely conservative and defensive politics, and one which has internalized the great right-wing motif of the past several decades: there is no alternative to neoliberal capitalism. To Loomis and DJW, the possibility of a historically novel progressive alternative is literally unthinkable. For them, the only choices are a) an intensification of neoliberalism’s logic of inequality and joblessness; or b) a desperate struggle to hold on to the remnants of the 20th century Keynesian social compromise. Given those options, I’d take the second choice as well.

But I don’t think those are the only options, and moreover I don’t think that in the long run this position is really as pragmatic as it seems. It commits the left to an endlessly reactive, defensive struggle over a shrinking commons, while leaving us bereft of any compelling vision to offer people. And trying to fight off automation won’t be a matter of a few rear-guard skirmishes, but of all-out societal-scale war: see Farhad Manjoo’s ongoing series on the pervasive effect of robotization throughout all sectors of the economy.

That isn’t to say that I’m always opposed to defensive struggles–sometimes that’s the best you can do, and sometimes winning a small human-scale victory is worth compromising our broader vision a bit. But the LGM authors go a good deal farther than this: Erik Loomis’s original post didn’t say that de-automation was a good second best outcome, he said that he was “very glad” to see the self-checkout machines disappear, because they are “a calculated plan by grocery stores to employ less people.” DJW, meanwhile, straightforwardly embraces Luddism. I’m taken aback by a worldview that would make such defensiveness and conservatism central to its ideology. That’s not what the left has been about at its best–and as Corey Robin explains, it’s not even what right-wing “conservatism” was ever about.

Left out of consideration in these anti-technology arguments is any conception that increased productivity could be used to benefit the masses rather than the elite. The decoupling of rising productivity from rising fortunes for workers is, after all, only a phenomenon of the past 30 years. In the period prior to that, rising productivity went with rising wages: this was the heart of the postwar Keynesian social compact. And in the period prior to that, rising productivity went along with a shortening of the working day, through a long series of bitter struggles. It’s odd, and a bit sad, to see the LGM bloggers ahistorically naturalizing the left’s weakness, especially given that at least one of the authors I’m discussing is a college professor. I thought it was the professors who were supposed to remind us of history, and to cling to impractical utopianism. But to find an antidote to the timid conservatism of the professor, we have to turn to the harebrained utopian dreaming of….dockworkers.

Containerization and automation have drastically decreased the need for human labor in America’s ports, as anyone who’s watched Season 2 of The Wire knows. But among some longshoremen the response wasn’t to resist the machines, but to accept them–with conditions:

In modern times, far more than other unions, the longshoreman have used technological change to their advantage. In 1960, the West Coast longshoremen agreed to far-reaching automation that replaced inefficient break-bulk cargo, which relied on hooks to move the cargo, with containerized cargo, which relies on cranes. In accepting automation, the union recognized that productivity would soar and the number of longshoremen needed would plunge; there are now 10,500 West Coast longshoremen, down from 100,000 in the 1950’s.

In exchange, the union received an unusual promise: port operators pledged to share the fruits of the new automation. Management promised all longshoremen a guaranteed level of pay, even if there was not work for everyone. Management also promised to share the wealth.

Bill DiFazio wrote a book about some New York longshoremen in this situation, and he makes a case against the view that without wage labor, our lives will lose meaning and we will drift into dissipation. He found instead that the lives of the longshoremen were greatly enriched, as they were freed from dangerous labor and became more deeply involved with their neighborhoods and their families.

Basically, I think this is the deal we need to strike throughout the economy: automation (and relatedly, free trade) in exchange for compensating the displaced. However, the longshoremen were only able to achieve this victory because they occupy an unusual strategic choke-point in the economy. Shutting down the ports can cripple wide swaths of business, and this gives dockworkers a kind of negotiating leverage that isn’t available to, say, supermarket checkers. Which is why I think that the demand to compensate workers for technological change now has to be fought out politically and electorally, at the level of the state, rather than in the individual workplace. That’s the essence of my argument for the Basic Income: just like the dockworkers’ agreement, it ensures a level of pay whether or not there is work for everyone, only it generalizes the principle to encompass the whole economy.

You can dismiss that as utopianism if you like. Certainly the call for work reduction and the decoupling of income from employment has been made many times through the generations, from Paul Lafargue to André Gorz to Stanley Aronowitz. But the left does itself no favors by remaining in a defensive crouch, clinging to nostalgia for a political order that was rooted in a very different political economy–and which wasn’t even all that great to begin with. Despite what William F. Buckley once said, the right didn’t win by “standing athwart history yelling ‘stop!'”–and on issues where they did do that, like racial segregation and gay marriage, they have lost or are losing. The modern right provided an offensive strategy and a grand vision of what was wrong with the society that existed and what had to be done to turn it into something better: one market under god.

Their dream of unrestrained capitalism, of course, turned out to be a nightmarish fraud. But that’s all the more reason to demand something new and better, rather than merely clinging to what’s left of the old.

Redistribution Under Neoliberalism

August 8th, 2011  |  Published in Data, Political Economy, Politics, Social Science, Statistical Graphics, xkcd.com/386

Last week, Seth Ackerman wrote a Jacobin blog post in which he gave us a snarky attack on the record of “left neo-liberalism” in the United Kingdom. Basically, he showed that while New Labour managed to reduce poverty somewhat with cash transfer programs, the progress was meager and could not be sustained. Since the programs were financed out of a series of asset bubbles, the UK has seen poverty go back up again with the recent crisis.

I don’t have much quarrel with this account, but I’m not sure it can bear the weight of the argument that Seth wants to put on it. He suggests that the UK experience is a refutation of the general strategy of progressive neoliberalism, which Freddie DeBoer felicitously dubbed “globalize-grow-give”:

First, you embrace the standard globalization model of reduced or eliminated tariff walls, large free trade agreements such as NAFTA or CAFTA, deregulation, and general trade liberalization. This encourages international trade and the exporting of jobs from highly-regulated, fairly well compensated, high worker standard of living places like the United States to the cheap labor, low regulation, low worker standard of living places like China or Indonesia. This spurs international economic growth in both the exporting and importing countries. Here at home, higher growth results in higher tax revenues which can then be redistributed from those at the top of the income distribution (who have benefited from the globalized trade regime) to those at the bottom of the income distribution (who have been hurt by the globalized trade regime that undercuts their wages and exports their jobs).

I think that if you want to really criticize this view, you need to look beyond the UK, which is neither a very generous nor a particularly well-designed welfare state. As it happens, my day job involves analyzing cross-national income data, so I’m going to perpetrate some social science on y’all.

The way I read the “globalize-grow-give” critique, you can extract an empirical claim about how the income distribution should look in a G-G-G economy. The distribution of income before taxes and transfers will become increasingly unequal due to deregulation and globalization, but the distribution after taxes and transfers are accounted for will not become vastly more unequal because government is compensating for the inequality in the private market.

To test this, I did some simple calculations, following other researchers who have done similar things. Using data from the Luxembourg Income Study, I calculated the Gini coefficient, a standard measure of inequality, for several different countries. I calculated two different Ginis:

  • The Gini of market income. Market income is defined here as income from wages, pensions, self-employment and property. This is income before any taxes or transfers are accounted for.
  • The Gini of disposable income. This is the income that people actually have to spend, after taxes are deducted and any transfers are added in. (For more details about the variables, see the postscript).

Unfortunately, the difficulty of harmonizing cross-national data means that the numbers I have access to are a bit out of date–specifically, they end before the current crisis period. I still think we can learn something useful from them, however. The way G-G-G neoliberalism is supposed to work, the Gini of market income should go up but the Gini of disposable income should not–or at least should rise more slowly. We can think of the difference between market income inequality and disposable income inequality as a rough measure of the amount of redistribution done by the state.
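For readers who want to see the mechanics, here is a minimal sketch of the two-Gini comparison in Python. The income figures are invented for illustration, and the flat-tax-plus-flat-transfer scheme is a toy stand-in for a real tax-and-transfer system, not the LIS methodology:

```python
import numpy as np

def gini(income):
    """Gini coefficient: 0 = perfect equality, 1 = maximal inequality."""
    x = np.sort(np.asarray(income, dtype=float))
    n = x.size
    # Identity based on cumulative shares of sorted income
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Hypothetical market incomes for five households
market = np.array([5_000, 20_000, 35_000, 60_000, 180_000], dtype=float)

# Toy redistribution: a flat 30% tax, returned as an equal per-household transfer
tax = 0.30 * market
disposable = market - tax + tax.sum() / len(market)

print(round(gini(market), 3), round(gini(disposable), 3))
```

The gap between the two Ginis is the rough measure of redistribution described in the text: the toy tax-and-transfer scheme leaves market inequality untouched but pulls disposable income inequality well below it.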

So here’s what things look like in the UK:

Income Inequality in the UK

This figure basically supports Seth’s argument. Market income inequality has gone way up in the last few decades, but disposable income inequality has gone up by a lot as well. The state is doing a bit more redistribution than it used to, but not enough to make up for the rise in private-market inequality. If you look at the United States, the situation is even worse, as the state has done essentially nothing to counter rising inequality in market income:

Income Inequality in the USA

The question, though, is whether it has to be like this. Let’s put the UK alongside another rich European economy, Germany:

Income Inequality in the UK and Germany

Here we see something very interesting. Before you take taxes and transfers into account, the rise in inequality in Germany looks very similar to what happened in the UK–indeed, the two countries converge to almost the same value by 2005. But disposable income inequality has stayed flat in Germany, because the German state has used taxes and transfers to counteract rising inequality.

Every good social democrat loves the Nordic model, so let’s finish off with a look at Sweden:

Income Inequality in Sweden

Here the story is a bit different–both market income and disposable income inequality have remained pretty flat, although both have risen a bit. The important thing to note here is that even in the most socialist of welfare states, market income inequality is very high, nearly as high as it is in the UK or US. The fact that Sweden is one of the least unequal countries on earth has to do almost entirely with taxes and transfers.

So what can we conclude from all this? Let me be clear that I don’t think this is a knock-down argument in favor of “globalize-grow-give” as a political model. But I think the best argument against the G-G-G model is not that it’s economically impossible or dependent on asset bubbles. Rather, I’d point us back to the political arguments enumerated by me, Henry Farrell, and Cosma Shalizi among others. What makes Sweden and Germany different is not that their economies are different from those in the US and UK (although they are), but that they have different political environments, featuring things like a hegemonic Social Democratic party in Sweden and a strong labor movement in Germany.

So if left-neoliberalism is to be a workable political agenda rather than the motto of useful idiots for the “globalize-grow-keep” agenda of the right-wing neoliberals, it has to either make its peace with the sources of working-class power that currently exist, or else come up with workable models of what might replace them.

[Postscript for income inequality nerds only: the income variables are equivalized for household size using the square root of the number of persons in the household as the equivalence scale. The variables are then topcoded at ten times the equivalized mean and bottom-coded at 1 percent of the equivalized mean.
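As a sketch of that preprocessing step (function and variable names are my own, and the order of operations here is a simplification of the actual LIS procedure):

```python
import numpy as np

def equivalize(income, hh_size):
    """Equivalize household income by the square root of household size,
    then top-code at 10x the equivalized mean and bottom-code at 1% of it."""
    eq = np.asarray(income, dtype=float) / np.sqrt(np.asarray(hh_size, dtype=float))
    mean = eq.mean()
    return np.clip(eq, 0.01 * mean, 10 * mean)
```

So a four-person household with the same total income as a single person is treated as having half that person's equivalized income, and extreme outliers at either tail are pulled in before the Gini is computed.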

Note that the transfers included in disposable income are only cash transfers and “near-cash” benefits (like food stamps), not in-kind services like health care. So you could argue that this data actually understates the extent of redistribution.

If you’d like to look at the data, including a bunch of countries I didn’t include in the post, it’s here. For help interpreting the country codes, go here]

To be a productive labourer is not a piece of luck, but a misfortune

July 29th, 2011  |  Published in Politics, Work, xkcd.com/386

Reihan Salam is by far the most interesting and creative thinker associated with the National Review. (To clarify: that’s a pretty low bar, but I actually think he’s interesting and creative in general.) So when I saw that he had responded to my post on cheap labor and technological stagnation, I hoped to find some arguments that would challenge my assumptions. Instead, I found this:

I’d argue that fulfilling and valuable work is work that provides individuals with “obstacles that arise naturally and authentically in their path,” to draw on Richard Robb.

It is fairly easy to construct a coherent story for Frase’s notion that supermarket checkout work isn’t sufficiently stimulating to merit survival. Unlike skilled trade work, it doesn’t involve the kind of problem-solving that allows us to stretch our capacities. Rather, it is about offering a service in a friendly and efficient way, which can be taxing but, over time, not necessarily very edifying. I definitely get that idea, and I certainly wouldn’t suggest that we should devote resources to saving supermarket checkout work per se.

But supermarket checkout work needs to be seen through a different lens. If I’m a young adult who had a child at a young age, my fulfillment could plausibly derive from the sense that I am contributing to the well-being of my child by engaging in wage work. The wage work in question might not be terribly stimulating, but to grin and bear it is to overcome an obstacle that arises naturally and authentically in my path to achieving some level of economic self-sufficiency. Granted, I might benefit from a host of work supports, including wage subsidies, etc., but I (rightly) see myself as making a contribution. It is not the work itself that is fulfilling. It is the fact that I am doing authentic work — not make-work designed to teach me a lesson about the value of, say, convincing taxpayers that I deserve my daily bread, but work that someone will voluntarily pay me a wage to do — in support of a vision of myself as a provider that is fulfilling.

I actually have to hand it to him for coming right out and making the “wage labor is good for you” argument, which is a much tougher sell than the usual “we need wage labor or nobody will do any work” argument, and hence is typically delivered in an elided and concealed fashion. But the notion of “authentic” work that’s being deployed here is one I have a hard time wrapping my head around, although I recognize it as a central element of right-wing metaphysics.

It’s easy to glorify the dignity of wage labor when you have a stimulating job at the National Review, but this line of argument rapidly loses its plausibility when you get to the low-wage jobs I was talking about. A lousy supermarket job that you only have because your time is valued at less than the time of an automatic checkout machine is somehow more authentic because someone “voluntarily” paid for it. Presumably it’s more authentic than being a firefighter, since they have to “convince the taxpayers” that they deserve to be paid. And Salam must not think his own job is all that authentic, since the National Review is sustained by rich donors and could never survive if it had to get by on subscription revenue. I could go on about this, but I already did in my review of “Undercover Boss” and my first essay for Jacobin.

As for the specific nature of supermarket work, this comment on the original NR post says it more powerfully than I could. It starts out: “Having worked as a supermarket checker, I can tell you that no one I worked with got anything out of the job other than a paycheck, and the rates of depression and substance abuse among my colleagues were staggering.”

And as a friend put it to me earlier today: “As if the unemployed are unfamiliar with natural and authentic obstacles”. But look, if you do need some “obstacles that arise naturally and authentically in your path”, try training for a marathon or something. Or I can recommend some excellent video games.

The authenticity stuff aside, we also have the patronizing suggestion that a young parent needs to feel that they are “contributing to the well-being of [their] child by engaging in wage work.” As though they aren’t already contributing to that well-being by taking care of a child, which requires a lot more skill and engagement than bagging groceries. Even without the childcare angle, though, maybe people would be less likely to feel they needed to take a crappy job in order to contribute to society, if people like Reihan Salam weren’t running around telling them exactly that.

To be fair, Salam does acknowledge that rather than stigmatizing the unemployed and people who do non-waged labor, we could try to break down the fetishization of waged work that gives it such “nonmaterial and psychological importance”. And I don’t dispute his point that this is a hard thing to do. But he doesn’t even seem interested in it. Instead, at the end of the post, he lays out his hopes for what’s to come: “In my scenario, the number of ‘working poor’ will likely increase”, and “servants and nannies will be the jobs of the future”:

This raises the question of what will happen to those trapped in the low end of the labor market. Recently, the cultural critic Annalee Newitz offered a provocative hypothesis: “We may return to arrangements that look a lot like what people had over a century ago,” Newitz writes. As more skilled women enter the workforce, and as the labor market position of millions of less-skilled workers deteriorates, we’ll see more servants and nannies in middle-class homes.

This “back to the 19th Century” vision is a scenario that has occurred to me as well, though I certainly never thought of it as a desirable end point. But hey, if the right thinks that’s the best thing they have to offer, they are welcome to make that their platform.

My question for Reihan Salam, though, is this. If National Review laid you off tomorrow, would you rather collect unemployment or go bag groceries because it would allow you to feel you were doing “authentic work” and had “overcome an obstacle that arises naturally and authentically in your path”? Maybe the answer would really be the latter, but I suspect for most people it wouldn’t be.

Anti-Star Trek Revisited: A Reply to Robin Hanson

July 21st, 2011  |  Published in anti-Star Trek, xkcd.com/386

[Update: A commenter informs me that Hanson actually does believe in intellectual property for utilitarian rather than moral reasons, so my apologies if I’ve misrepresented him on that point. It was totally unclear from the post I was replying to, but I should have done some more poking around before I made that assumption.]

One of the best things about having something you wrote go flying around the Internet for a few days is that you get lots of feedback and ideas from interesting people with whom you’d normally never interact. This is the promise of what Brad DeLong called the “invisible college”, and I must say I’m really enjoying it. It’s kind of like getting peer reviewed for a journal article, except that the volume and quality of reaction I’ve gotten to Anti-Star Trek has been superior to the actual peer reviews I’ve received.

Most people who took the time to write about my post were inclined to view it favorably, but of course the real fun is being told that you’re wrong on the Internet. Robin Hanson at Overcoming Bias actually tried to defend Anti-Star Trek as a superior arrangement to actual Star Trek. I think Hanson is some kind of libertarian, and the tone of the post is pretty snide and condescending, but whatever; I’ve said nastier things about libertarians. It’s still worth addressing what he says. His argument has three separable components: the first misses the point, the second is irrelevant, and the third reveals an important moral disagreement about what makes for a good society.

First, Hanson wants to say that really, my portrayal of Star Trek as a communist society is wrong. There are still some resource constraints, we see market exchange (although I’d argue it’s mostly what Erik Olin Wright likes to call “capitalism between consenting adults”), and so on:

Now it should be noted that Star Trek fiction has many cases of people using money and trading. Even setting that aside, replicators need both matter and energy as input, and neither could ever be in infinite supply. So even an ideal “communist” Star Trek must enforce limited budgets of access to such things. Lawyers and guardians would need to adjudicate and enforce such limits.

True enough, but this was a thought experiment. I was trying to extract the element of the Star Trek universe that is both unusual and resonant with present-day trends, and that’s the existence of post-scarcity technologies. Allocating scarce goods and resources is an old and less interesting problem, so I wrote that stuff out of the thought experiment.

Second, Hanson claims that I’m glorifying the government/military hierarchy of Starfleet over the hierarchies that would be produced by the intellectual property-based regime of Anti-Star Trek.

After all, this might lead to unequal “classes,” where some own more than others. This even though Star Fleet displays lots of hierarchy and inequality, and spends large budgets that must come at the expense of private budgets.

The far future seems to have put Frase in full flaming far mode, declaring his undying allegiance to a core ideal: he prefers the inequality that comes from a government hierarchy, over inequality that comes from voluntary trade. Sigh.

But the structure of Starfleet has nothing to do with the underlying economic basis of the Star Trek universe. The fact that people can engage in the kind of space adventure we see on the show is made possible by abundance and an underlying communist social structure, but it isn’t a necessary consequence of them. And the fact that Starfleet is structured like a naval hierarchy is justified by the existence of hostile alien races–which, again, isn’t the aspect of the Star Trek universe that I was interested in for this thought experiment.

Finally, Hanson wants to suggest that it is just and right that people should be rewarded monetarily for the intellectual property they create.

In both the Star Trek and Anti-Star Trek societies, the main source of long term value seems to be the accumulation of better designs. Yet Frase (and apparently Yglesias) is horrified to imagine that the people who contribute this main value might get paid for their contributions. After all, this might lead to unequal “classes,” where some own more than others.

Of course, he doesn’t really mean people should be “paid for their contributions”. That would just mean rewarding people when they come up with a good idea. Anti-Star Trek, however, adds the further requirement that the original creator should get paid every time someone makes use of their idea.

It’s hard to see why you would approve of this, unless you justify it on the grounds of morality rather than economic efficiency. In this regard, it’s interesting to contrast Hanson’s vitriol with Matt Yglesias’s favorable reaction to what I wrote. Both Hanson and Yglesias approve of a maximalist neoliberal vision of markets and commodification in a way that I don’t, although Yglesias’s politics are much closer to mine. But Yglesias approaches intellectual property in basically utilitarian terms: he views the artificial monopoly and scarcity mandated by IP law as justified if and only if it leads to more creation of knowledge and culture. This is also the view of IP that’s enshrined in the constitution: the point of copyright is “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”

Hanson, in contrast, seems to be taking a position that I often see on the libertarian fringes: he thinks that people have some kind of inherent right to be showered with riches if they come up with a popular idea. One response to this is the Yglesias/utilitarian one: this is silly because it doesn’t lead to maximizing overall human well-being, and it’s clear that lots of valuable new ideas get created even in the absence of IP rights.

My response is somewhat different: I don’t think it even makes sense to obsess about who did or didn’t “create” some specific idea. The progress of human culture is a cumulative process–“standing on the shoulders of giants”, Stigler’s Law, and so on. Moreover, all creators are dependent on living in a very specific type of society–technologically advanced, low levels of violence, high levels of education, and so on–that facilitates their work. And the ubiquity of simultaneous invention suggests to me that there is little rationale behind the desire to anoint some specific person as “the creator” of a good idea. Even from a more libertarian perspective, I don’t see the point of rewarding people for coming up with ideas. As Levine and Boldrin like to argue, the trick is to successfully implement and popularize an idea. Or as the Mark Zuckerberg character says to the Winklevii in that Facebook movie: if you had invented Facebook, then you would have invented Facebook.

Nevertheless, I think Hanson’s response is worth paying attention to, because the transition to a world like Anti-Star Trek probably requires a cultural shift from the utilitarian Yglesias perspective on intellectual property to the Hansonian moralistic view, in which copying is viewed as morally equivalent to theft. Yglesias noted in a follow-up that some people objected to Anti-Star Trek on the grounds that under current IP law, things like replicator patterns might not be covered, and in any case copyrights don’t last forever. But as he then notes, laws can change, especially when powerful rentier interests want them to change. And it will be much easier to bring about the transition to eternal, all-encompassing intellectual property protections if people stop thinking of IP as a necessary evil to encourage innovation, and start thinking of it as a basic human right of “creators”.