New Jacobin Essay, and an addendum on Bahrain

March 15th, 2011  |  Published in Imperialism, Politics

I have a [new essay for Jacobin Magazine](http://jacobinmag.com/archive/issue2/frase.html), about what the Arab revolutions of 2011 mean for anti-imperialist politics in the United States. I'd encourage my handful of readers on this site to take their click traffic over that way--in spite of my involvement, Bhaskar Sunkara has put together a great group of writers at [Jacobin](http://jacobinmag.com).

By writing about such fast-moving events, I ensured that my contribution would be outdated as soon as it appeared. One thing I had to skip over entirely in the essay was the events in Bahrain, but they're worth discussing because they're an important counterpoint to the cases I discussed in that piece. I focused on Egypt and Libya, and I argued that American leverage in those two cases was considerably less than most people--left and right--seemed to think. But Bahrain is the opposite sort of case, and it's pretty clear that Obama has more influence over the situation there than anyone in the American elite wants to admit. A comparison between the way Bahrain and Libya are being discussed in the press illuminates the contradictions and hypocrisies that characterize debates about foreign policy in the United States.

To summarize: Saudi Arabia has [sent troops](http://www.nytimes.com/2011/03/15/world/middleeast/15bahrain.html) into Bahrain to put down the escalating protests against the monarchy there. Bahrain's ruling family is Sunni Muslim, while the majority of the population is Shia--and the Saudis are clearly afraid that the uprising might give their own Shiite minority some ideas. But by sending in troops, the Saudis could make the whole situation much more volatile and deadly--already, the opposition is [denouncing the move](http://www.reuters.com/article/2011/03/14/bahrain-protests-opposition-idUSLDE72D16I20110314) as an "occupation" and a "declaration of war". The U.S. government, meanwhile, ["does not consider"](http://www.reuters.com/article/2011/03/14/bahrain-usa-invasion-idUSWNA351920110314) the Saudi action to be an invasion.

The United States is deeply implicated in all of this--there is a [major U.S. naval base](http://www.cusnc.navy.mil/) in Bahrain, while the Saudis are close American allies and [loyal customers](http://www.washingtonpost.com/wp-dyn/content/article/2010/01/30/AR2010013001477.html) of our military-industrial complex. And as Brookings Doha Center analyst Shadi Hamid [remarked on Twitter](http://twitter.com/#!/shadihamid/status/47311059555598336), the Saudis wouldn't have gone into Bahrain without U.S. approval, or at least "lack of a red light". Former British diplomat Craig Murray gets even more specific, [reporting](http://craigmurray.org.uk/archives/2011/03/the-invasion-of-bahrain/) that "A senior diplomat in a western mission to the UN in New York . . . has told me for sure that Hillary Clinton agreed to the cross-border use of troops to crush democracy in the Gulf, as a quid pro quo for the Arab League calling for Western intervention in Libya."

All of which goes to show that when people ask what the Obama administration can do to help the uprisings in the Middle East, the sensible response is that they should start by ceasing to actively prop up the dictators there. Backing away from the Saudi and Bahraini monarchies would be far easier and less bloody than, say, invading Libya. This is the fundamental reason why I don't think we can take the pronouncements of liberal humanitarian imperialists like [Jackson Diehl](http://www.washingtonpost.com/opinions/will-gaddafi-reverse-the-tide-of-the-arab-spring/2011/03/10/AB6GK6T_story.html) at face value when they insist that American military intervention is the only solution to authoritarian regimes or global humanitarian crises. Take the aptly-named Anne-Marie Slaughter, who [took to the New York Times](http://www.nytimes.com/2011/03/14/opinion/14slaughter.html) to condemn Obama for not taking unilateral military action in Libya. To advocate such a dangerous and deadly course of action while ignoring the American role in fomenting human rights abuses in the Arab world is not just ill-considered, it's fundamentally dishonest. But as Matt Yglesias [remarks](http://yglesias.thinkprogress.org/2011/03/the-slippery-slope-of-unilateralism/), "there's definitely a set of people in the United States who seem to want to help suffering people in the developing world if and only if that can be accomplished by killing other people in the developing world."

As I say in the Jacobin essay, I think that the decline of American imperial power (and more proximately, the lesson of Iraq) is making people like Diehl and Slaughter less dangerous, simply because their deranged schemes are less likely to be realized. But these silly debates about no-fly zones and humanitarian invasions do still serve to distract attention from American complicity in atrocities like what's happening in Bahrain. So long as the liberal warmongers can get a hearing in the New York Times and the Washington Post, there's still a need for a forthrightly anti-imperialist left that can make the argument that the best thing for the liberation movements around the world would be an American government and military that does *less* to interfere in their affairs, rather than more.

The Right to the Unknown

February 11th, 2011  |  Published in Politics  |  2 Comments

Tahrir Square, Feb 11 2011 (Image by Matthew Cassell)

The most moving and insightful comment I've seen on the Egyptian revolution of 2011 appeared, appropriately enough, on Twitter. Last night, after Hosni Mubarak punked the nation with his non-resignation speech, Alaa Abd El Fattah said this:

don't know what will happen. pre #Jan25 I could predict tomorrow will be like today and yesterday, we revolt to gain the right to unkown

I only know what WE WANT and what I WILL DO

In ordinary times, tomorrow is like today, and the future is predictable--not just in Egypt but everywhere. The essence of revolutionary moments is not any particular set of demands or changes, but their very unknowability. And that sense that the future is unwritten--the "right to the unknown"--is a rare and thrilling and sometimes terrifying kind of human freedom.

Today, with Hosni Mubarak finally departing the scene, many are warning that the Egyptian army may yet perpetuate the old regime and snatch away the dreams of the protesters in Liberation Square. And they may yet do so. But those who issue such warnings are the same people who predicted that the Tunisian revolution would not spread, that the Egyptian protests would not grow, and that Hosni Mubarak would not give up power. They are the people who predicted that today would be like yesterday--but it isn't, not anymore.

If you predict that the improbable won't happen, you will usually be proven right. But eventually you will be wrong, and when you are you will be proven very, very wrong. Wall Street bankers made money for years by betting against improbable events--until the improbable occurred, the housing market collapsed, and we were plunged into a financial crisis. In the same way, professional Egyptologists padded their wallets and their reputations by predicting, year after year, that Mubarak could not fall.

But today none of that matters. Whatever else they may yet achieve, the Egyptian people have ensured that their future is unknown, for at least a few more months.

Idiocracy’s Theory of the Future

January 12th, 2011  |  Published in Art and Literature, Political Economy  |  5 Comments

Mike Judge's [*Idiocracy*](http://www.imdb.com/title/tt0387808/) is a pretty smart and funny movie, which touches on some themes I've recently [written about](http://www.peterfrase.com/2010/12/anti-star-trek-a-theory-of-posterity/). But it's also a widely underappreciated and misunderstood film. Perhaps that's because one of the people who seems to misunderstand it the most is its own writer and director, Mike Judge.

The basic premise of the film, as per IMDB:

> Private Joe Bauers, the definition of "average American", is selected by the Pentagon to be the guinea pig for a top-secret hibernation program. Forgotten, he awakes 500 years in the future. He discovers a society so incredibly dumbed-down that he's easily the most intelligent person alive.

The rest of the film is an extended satirical riff on this idiotic future society. Its residents are both unbelievably crude and endlessly capable of falling for consumerist marketing bullshit. With regard to the former: Starbucks now offers hand jobs, everyone regards reading and thinking as activities for "fags", and one of the film's set pieces involves a #1 hit film called "Ass", consisting of nothing but the image described in the title. In a climactic scene Joe Bauers (played by Luke Wilson) addresses Congress, wistfully declaring that:

> there was a time in this country, a long time ago, when reading wasn't just for fags and neither was writing. People wrote books and movies, movies that had stories so you cared whose ass it was and why it was farting, and I believe that time can come again!

Meanwhile, everyone in the future mindlessly repeats advertising slogans as though they were a scientific consensus. The threat of famine looms because everyone insists on watering crops with a noxious energy drink called Brawndo, while insisting that "it's got electrolytes . . . they're what plants crave!" It's left to Joe Bauers to convince his moronic fellow humans of the virtues of old fashioned water.

This sounds like the sort of thing your average anti-corporate liberal might enjoy, although I'd note that liberal yuppies are [hardly](http://www.youtube.com/watch?v=7L2fsubA2-c) [immune](http://www.youtube.com/watch?v=FL7yD-0pqZg) to this sort of irrational marketing hype. But the movie made a lot of people uncomfortable, and it has been mostly forgotten since its 2006 release. In part, that's because of the generally elitist "most people are idiots" vibe that Judge evokes. But more specifically, I think it's because of the film's overtly misanthropic, eugenics-minded opening.

This [reaction from Manohla Dargis](http://movies.nytimes.com/2009/09/04/movies/04extract.html) is typical:

> "Idiocracy" expresses the kind of fear lampooned, consciously or not, in the old joke about revolting masses. (Messenger: "The masses are revolting!" King: "You’re telling me!") It opens with a comparison between trailer-trash types, with low I.Q.’s, who freely propagate, and smarty-pants types who fret about conceiving, using every excuse to find the perfect time to have children. In the end the low I.Q.-ers overrun the intelligent, who die off, which is funny if you think that only certain kinds of people should reproduce. An equal-opportunity offender, Mr. Judge can wield satire like a sledgehammer, so it’s no surprise that he doesn’t bother with the complexities of class and representation in a bit about the dire consequences of a birth dearth.

This bit of the movie is every bit as offensive and reactionary as Dargis suggests it is, and its stupidity is pretty much summed up in this [xkcd cartoon](http://xkcd.com/603/). But the tragedy of the whole movie is that *this premise is totally unnecessary*. It's completely possible to explain the emergence of the *Idiocracy* future based on sociological and political-economic themes that have nothing to do with genetic determinism, while leaving the rest of the movie mostly unchanged.

To me, one of the most interesting and suggestive bits of the movie is the following exchange toward the end of [the story](http://www.pages.drexel.edu/~ina22/splaylib/Screenplay-Idiocracy.htm):

Joe and the Cabinet Members are gathered around a VIDEO PHONE
talking to the CEO OF RAUNCHO, who's in his office, panicking.
We hear people rioting outside his building and occasionally
bottles and debris hit his window.

RAUNCHO CEO
What happened?!

JOE
Ah... Well, we switched the crops to
water.

RAUNCHO CEO
I'm not talking about that.
(points to a computer
screen, freaked out)
Our sales are all like, down. Way
down! The stock went to zero and the
computer did auto-layoff on
everybody!

ATTORNEY GENERAL
Shit! Almost everyone in the country
works for Rauncho!

RAUNCHO CEO
Not anymore! And the computer said
everyone owes Rauncho money!
Everyone's bank account is zero now!

What does this exchange tell us about the film's implicit [theory of posterity](http://www.peterfrase.com/2010/12/social-science-fiction/)?

1. The future economy is highly automated, to the point that even the management of companies is done automatically by a computer.
2. People nonetheless need money to pay for things, which they get by working for Brawndo (which is called "Rauncho" in this earlier version of the screenplay). It's not clear what they do for their money, but it can't be very important in light of their obvious stupidity and the above-noted automation.
3. The continued stability of this society is therefore dependent on the existence of a business which does not actually improve anyone's material standard of living--indeed, it is *decreasing* it by killing all the crops.

The theory of posterity that grounds *Idiocracy*, it seems to me, is a close cousin of [Anti-Star Trek](http://www.peterfrase.com/2010/12/anti-star-trek-a-theory-of-posterity/): an economy that needs humans as consumers, but makes them mostly superfluous as producers.

So how does this explain the fact that everyone is such a moron? Well, consider what would happen to education in a society like this. If the productive economy is all run by computers, then there's no need to teach people how to make things, or how anything actually works. On the contrary, it would be economically beneficial to encourage delusions about the magical properties of consumer products, the better to ensure that people will continue to drink Brawndo rather than water. In other words, there is no economic incentive to produce intelligence. We can imagine that at some point in the past, legitimate institutions of higher education were dismantled (perhaps by the people Diane Ravitch discusses [here](http://www.slate.com/blogs/blogs/thewrongstuff/archive/2010/05/17/diane-ravitch-on-being-wrong.aspx)), and replaced by things like [Costco Law School](http://www.imdb.com/title/tt0387808/quotes?qt0427921).

I really wish someone would make a movie that's as funny as *Idiocracy* without falling back on such lazy right-wing premises. On the other hand, it's intriguing that Judge could end up making a film that mostly functions as a radical critique even though it's based on a reactionary assumption. *Idiocracy* does illuminate a dangerous trend in contemporary capitalism--one that has nothing to do with the wrong people having babies, and everything to do with a system that increasingly reproduces itself by producing stupidity in the population. The movie's only mistake is to think that our genes can save us from stupidity, when it seems far more defensible to say that "intelligence" is some combination of socially nurtured ability and [statistical myth](http://cscs.umich.edu/~crshalizi/weblog/523.html).

Translating English into English

January 4th, 2011  |  Published in Art and Literature, Sociology  |  3 Comments

So it seems there's going to be a [censored version](http://www.publishersweekly.com/pw/by-topic/industry-news/publisher-news/article/45645-upcoming-newsouth-huck-finn-eliminates-the-n-word.html) of *The Adventures of Huckleberry Finn* that replaces the word "nigger" with "slave". My initial reaction was to agree with [Doug Mataconis](http://www.outsidethebeltway.com/publisher-to-delete-racially-insensitive-words-from-huckleberry-finn-tom-sawyer/) that this is both offensive and stupid. It struck me as being of a piece with the general tendency of white Americans to deal with the existence of racism by ignoring it rather than talking about it.

And I guess I still feel that way, but after reading [Kevin Drum's take](http://motherjones.com/kevin-drum/2011/01/bowdlerizing-huck) I'm more sympathetic to Alan Gribben, the Twain scholar responsible for the new censored version. Gribben says that because of the extreme visceral reactions people have to the word "nigger", most teachers today feel they can't get away with assigning *Huck Finn* to their students, even if they'd really like to. So the choice was to either consent to this bowdlerization or else let the book gradually disappear from our culture altogether. I'm still a bit torn about it--and I think that the predicament of the teachers Gribben talked to *is* indicative of precisely the cowardly attitudes toward race that I described above. But I'm willing to accept that censoring the book was the least-bad response to this unfortunate state of affairs.

However, what most caught my attention about Kevin Drum's post on the controversy was this:

> In fact, given the difference in the level of offensiveness of the word *nigger* in 2010 vs. 1884, it's entirely possible that in 2010 the bowdlerized version more closely resembles the intended emotional impact of the book than the original version does. Twain may have meant to shock, but I don't think he ever intended for the word to completely swamp the reader's emotional reaction to the book. Today, though, that's exactly what it does.

That got me thinking about a more general point concerning our relationship to old writings: it's a shame that we neglect to re-translate older works into English merely because they were originally written in English. Languages change, and our reactions to words and formulations change. This is obvious when you read something like [Chaucer](http://www.librarius.com/cantales.htm), but it's true, more subtly, of more recent writings as well. There is a pretty good chance that something written in the 19th century won't mean the same thing to us that it meant to its contemporary readers. Thus it would make sense to re-translate *Huckleberry Finn* into modern language, in the same way we periodically get new translations of Homer or Dante or Thomas Mann. This point applies equally well to non-fiction and social theory: in some ways, English-speaking sociologists are lucky that our canonical trio of classical theorists--Marx, Weber, and Durkheim--all wrote in another language. The [most recent translation](http://www.amazon.com/Capital-Critique-Political-Economy-Classics/dp/0140445684) of *Capital* is far more readable than the [older ones](http://books.google.com/books?id=fmTUJewBeToC&dq=marx%20capital&pg=PA363#v=onepage&q&f=false)--and I know I could have used a modern English translation of Talcott Parsons when I was studying contemporary theory.

Now, one might respond to this by saying that writing loses much in translation, and that some things just aren't the same unless you read them in the original un-translated form. And that's probably true. But it would still be good to establish the "English-to-English translation" as a legitimate category, since it would give us a better way of understanding things like the new altered version of *Huck Finn*. You would have the original Huck and the "new English translation" of Huck existing side by side; students would read the translation in high school, but perhaps they would be introduced to the original in college. We could debate whether a new translation was good or bad without getting into fruitless arguments over whether one should ever alter a classic book. And maybe it would help us all develop a more historical and contextual understanding of language and be less susceptible to [the arbitrary domination of prescriptive grammarians](http://www.peterfrase.com/2009/09/never-been-in-a-language-riot/).

Obligatory Google Ngram Post

December 20th, 2010  |  Published in Data, Social Science, Statistical Graphics, Time  |  1 Comment

It appears that everyone with a presence on the Internet is obligated to post some kind of riff on the [amazing Google Ngram Viewer](http://ngrams.googlelabs.com/info). Via Henry Farrell, I see that Daniel Little has attempted to [perpetrate some social science](http://understandingsociety.blogspot.com/2010/12/new-tool-for-intellectual-history.html), which made me think that perhaps while I'm at it, I can post something that actually relates to my dissertation research for a change. Hence, this:

Instances of the phrases "shorter hours" and "higher wages" in the Google ngram viewer.

Click for a bigger version, but the gist is that the red line indicates the phrase "higher wages", and the blue line is "shorter hours". Higher wages have a head start, with hours not really appearing on the agenda until the late 19th century. That's a bit later than I expected, but it's generally consistent with what I know about hours-related labor struggle in the 19th century.
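For anyone who wants to reproduce the chart, the query behind it can be sketched in a few lines of Python. The JSON endpoint below is the Ngram Viewer's unofficial one, and the parameter names and response shape are my assumptions based on how the viewer's URLs look--they may change without notice:

```python
from urllib.parse import urlencode

def ngram_query_url(phrases, year_start=1800, year_end=2008,
                    corpus="en-2019", smoothing=3):
    """Build a query URL for the (unofficial) Ngram Viewer JSON endpoint."""
    params = {
        "content": ",".join(phrases),
        "year_start": year_start,
        "year_end": year_end,
        "corpus": corpus,
        "smoothing": smoothing,
    }
    return "https://books.google.com/ngrams/json?" + urlencode(params)

def peak_year(series, year_start):
    """Return the year at which a phrase's relative frequency peaks."""
    timeseries = series["timeseries"]
    return year_start + max(range(len(timeseries)), key=timeseries.__getitem__)

url = ngram_query_url(["shorter hours", "higher wages"])
print(url)

# The endpoint (as assumed here) returns a list of {ngram, timeseries} objects,
# one frequency value per year. A stub response for illustration:
sample = {"ngram": "shorter hours", "timeseries": [0.0, 1e-7, 5e-7, 2e-7]}
print(peak_year(sample, 1800))  # peak at index 2 -> 1802
```

Fetching the URL and plotting the two timeseries against the year range would reproduce something like the graph above.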

The 20th century is the more interesting part of the graph in any case. For a while, it seems that discussion of wages and hours moves together. They rise in the period of ferment after World War I, and again during the depression. Both decline during World War II, which is unsurprising--both wage and hour demands were subordinated to the mobilization for war. But then after the war, the spike in mentions of "higher wages" greatly outpaces mentions of "shorter hours"--the latter has only a small spike, and thereafter the phrase enters a secular decline right through to the present.

Interest in higher wages appears to experience a modest revival in the 1970's, corresponding to the beginnings of the era of wage stagnation that we are still living in. But for the first time, there is no corresponding increase in discussion of shorter hours. This is again not really surprising, since the disappearance of work-time reduction from labor's agenda has been widely remarked upon. But it's still pretty interesting to see such evidence of it in the written corpus.

Anti-Star Trek: A Theory of Posterity

December 14th, 2010  |  Published in anti-Star Trek, Art and Literature, Political Economy  |  71 Comments

In the process of trying to pull together some thoughts on intellectual property, zero-marginal-cost goods, immaterial labor, and the incipient transition to a rentier form of capitalism, I've been working out a thought experiment: a possible future society I call *anti-Star Trek*. Consider this a stab at a [theory of posterity](http://www.peterfrase.com/2010/12/social-science-fiction/).

One of the intriguing things about the world of Star Trek, as Gene Roddenberry presented it in *The Next Generation* and subsequent series, is that it appears to be, in essence, a communist society. There is no money, everyone has access to whatever resources they need, and no-one is required to work. Liberated from the need to engage in wage labor for survival, people are free to get in spaceships and go flying around the galaxy for edification and adventure. Aliens who still believe in hoarding money and material acquisitions, like the Ferengi, are viewed as barbaric anachronisms.

The technical condition of possibility for this society comprises two basic components. The first is the replicator, a technology that can [make instant copies of any object with no input of human labor](http://radar.oreilly.com/2010/12/diy-fabrication-hits-a-new-pri.html). The second is an apparently unlimited supply of free energy, due to anti-matter reactions or dilithium crystals or whatever. It is, in sum, a society that has overcome scarcity.

Anti-Star Trek takes these same technological premises: replicators, free energy, and a post-scarcity economy. But it casts them in a different set of social relations. Anti-Star Trek is an attempt to answer the following question:

* Given the material abundance made possible by the replicator, how would it be possible to maintain a system based on money, profit, and class power?

Economists [like to say](http://delong.typepad.com/sdj/2010/12/what-do-econ-1-students-need-to-remember-most-from-the-course.html) that capitalist market economies work optimally when they are used to allocate scarce goods. So how to maintain capitalism in a world where scarcity can be largely overcome? What follows is some steps toward an answer to this question.

Like industrial capitalism, the economy of anti-Star Trek rests on a specific state-enforced regime of property relations. However, the kind of property that is central to anti-Star Trek is not physical but *intellectual* property, as codified legally in the patent and copyright system. While contemporary defenders of intellectual property like to speak of it as though it is broadly analogous to other kinds of property, it is actually based on a quite different principle. As the (libertarian) economists Michele Boldrin and David K. Levine [point out](http://levine.sscnet.ucla.edu/general/intellectual/coffee.htm):

> Intellectual property law is not about your right to control your copy of your idea - this is a right that . . . does not need a great deal of protection. What intellectual property law is really about is about your right to control my copy of your idea. This is not a right ordinarily or automatically granted to the owners of other types of property. If I produce a cup of coffee, I have the right to choose whether or not to sell it to you or drink it myself. But my property right is not an automatic right both to sell you the cup of coffee and to tell you how to drink it.

This is the quality of intellectual property law that provides an economic foundation for anti-Star Trek: the ability to tell others how to use copies of an idea that you "own". In order to get access to a replicator, you have to buy one from a company that licenses you the right to use a replicator. (Someone can't give you a replicator or make one with their replicator, because that would violate their license). What's more, every time you make something with the replicator, you also need to pay a licensing fee to whoever owns the rights to that particular thing. So if the Captain Jean-Luc Picard of anti-Star Trek wanted ["tea, Earl Grey, hot"](http://www.youtube.com/watch?v=R2IJdfxWtPM), he would have to pay the company that has copyrighted the replicator pattern for hot Earl Grey tea. (Presumably some other company owns the rights to cold tea.)

This solves the problem of how to maintain for-profit capitalist enterprise, at least on the surface. Anyone who tries to supply their needs from their replicator without paying the copyright cartels would become an outlaw, like today's online file-sharers. But if everyone is constantly being forced to pay out money in licensing fees, then they need some way of *earning* money, and this brings up a new problem. With replicators around, there's no need for human labor in any kind of physical production. So what kind of jobs would exist in this economy? Here are a few possibilities.

1. *The creative class*. There will be a need for people to come up with new things to replicate, or new variations on old things, which can then be copyrighted and used as the basis for future licensing revenue. But this is never going to be a very large source of jobs, because the labor required to create a pattern that can be infinitely replicated is orders of magnitude less than the labor required in a physical production process in which the same object is made over and over again. What's more, we can see in today's world that lots of people [will create and innovate on their own](http://cyber.law.harvard.edu/wealth_of_networks/Main_Page), without being paid for it. The capitalists of anti-Star Trek would probably find it more economical to simply pick through the ranks of unpaid creators, find new ideas that seem promising, and then buy out the creators and turn the idea into the firm's intellectual property.

2. *Lawyers*. In a world where the economy is based on intellectual property, companies will constantly be [suing each other](http://yro.slashdot.org/story/10/12/10/154202/Worlds-Largest-Patent-Troll-Fires-First-Salvo) for alleged infringements of each others' copyrights and patents. This will provide employment for some significant fraction of the population, but again it's hard to see this being enough to sustain an entire economy. Particularly because of a theme that will arise again in the next couple of points: just about anything can, in principle, be automated. It's easy to imagine big intellectual property firms coming up with procedures for mass-filing lawsuits that rely on fewer and fewer human lawyers. On the other hand, perhaps an equilibrium will arise where every individual needs to keep a lawyer on retainer, because they can't afford the cost of auto-lawyer software but they must still fight off lawsuits from firms attempting to [win big damages for alleged infringement](http://news.cnet.com/8301-1023_3-20021735-93.html).

3. *Marketers*. As time goes on, the list of possible things you can replicate will only continue to grow, but people's money to buy licenses--and their time to enjoy the things they replicate--will not grow fast enough to keep up. The biggest threat to any given company's profits will not be the cost of labor or raw materials--since they don't need much or any of those--but rather the prospect that the licenses they own will lose out in popularity to those of competitors. So there will be an unending and cut-throat competition to market one company's intellectual properties as superior to the competition's: Coke over Pepsi, Ford over Toyota, and so on. This should keep a small army employed in advertising and marketing. But once again, beware the spectre of automation: advances in data mining, machine learning and artificial intelligence may lessen the amount of human labor required even in these fields.

4. *Guard labor*. The term "Guard Labor" is [used by the economists Bowles and Jayadev](http://ideas.repec.org/p/ums/papers/2004-15.html) to refer to:

> The efforts of the monitors, guards, and military personnel . . . directed not toward production, but toward the enforcement of claims arising from exchanges and the pursuit or prevention of unilateral transfers of property ownership.

In other words, guard labor is the labor required in any society with great inequalities of wealth and power, in order to keep the poor and powerless from taking a share back from the rich and powerful. Since the whole point of *anti-Star Trek* is to maintain such inequalities even when they appear economically superfluous, there will obviously still be a great need for guard labor. And the additional burden of enforcing intellectual property restrictions will increase demand for such labor, since it requires careful monitoring of what was once considered private behavior. Once again, however, automation looms: robot police, anyone?

These, it seems to me, would be the main sources of employment in the world of anti-Star Trek. It seems implausible, however, that this would be sufficient--the society would probably be subject to a persistent trend toward under-employment. This is particularly true given that all the sectors except (arguably) the first would be subject to pressures toward labor-saving technological innovation. What's more, there is also another way for private companies to avoid employing workers for some of these tasks: turn them into activities that people will find pleasurable, and will thus do for free on their own time. Firms like Google are already experimenting with such strategies. The computer scientist Luis von Ahn has specialized in developing ["games with a purpose"](http://www.cs.cmu.edu/~biglou/ieee-gwap.pdf): applications that present themselves to end users as enjoyable diversions, but which also perform a useful computational task. One of von Ahn's games asked users to identify objects in photos, and the data was then fed back into a database that was used for searching images. It doesn't take much imagination to see how this line of research could lead toward the world of Orson Scott Card's novel [*Ender's Game*](http://en.wikipedia.org/wiki/Ender's_Game), in which children remotely fight an interstellar war through what they think are video games.

Thus it seems that the main problem confronting the society of anti-Star Trek is the problem of effective demand: that is, how to ensure that people are able to earn enough money to be able to pay the licensing fees on which private profit depends. Of course, this isn't so different from the problem that confronted industrial capitalism, but it becomes more severe as human labor is increasingly squeezed out of the system, and human beings become superfluous as elements of *production*, even as they remain necessary as *consumers*.

Ultimately, even capitalist self-interest will require some redistribution of wealth downward in order to support demand. Society reaches a state in which, as the late André Gorz [put it](http://books.google.com/books?id=xRQOcJWXRwEC&pg=PA90&lpg=PA90&dq=gorz+%22means+of+payment%22&source=bl&ots=RjZtvZ7QxG&sig=eWqhIlEDIfuXfcIaAFkh5EHtiAw&hl=en&ei=fJYHTeHIOIK88gb_3v2hBw&sa=X&oi=book_result&ct=result&resnum=1&ved=0CBMQ6AEwAA#v=onepage&q&f=false), "the distribution of means of payment must correspond to the volume of wealth socially produced and not to the volume of work performed". This is particularly true--indeed, it is *necessarily* true--of a world based on intellectual property *rents* rather than on value based on labor-time.

But here the class of rentier-capitalists will confront a collective action problem. In principle, it would be possible to sustain the system by taxing the profits of profitable firms and redistributing the money back to consumers--possibly as [a no-strings-attached guaranteed income](http://www.aei.org/book/846), and possibly in return for performing some kind of [meaningless](http://www.bbc.co.uk/news/uk-politics-11728546) [make-work](http://movieclips.com/4aBM-office-space-movie-did-you-get-the-memo/). But even if redistribution is desirable from the standpoint of the class as a whole, any individual company or rich person will be tempted to free-ride on the payments of others, and will therefore resist efforts to impose a redistributive tax. Of course, the government could also simply print money to give to the working class, but the resulting inflation would just be an indirect form of redistribution and would also be resisted. Finally, there is the option of funding consumption through consumer indebtedness--but this merely delays the demand crisis rather than resolving it, as residents of the present know all too well.

This all sets the stage for ongoing stagnation and crisis in the world of anti-Star Trek. And then, of course, there are the masses. Would the power of ideology be strong enough to induce people to accept the state of affairs I've described? Or would people start to ask why the wealth of knowledge and culture was being enclosed within restrictive laws, when "another world is possible" beyond the regime of artificial scarcity?

Marx’s Theory of Alien Nation

December 10th, 2010  |  Published in Art and Literature, Social Science, Socialism  |  1 Comment

Charles Stross hits another one out of the park today. The post attempts to explain the widespread sentiment that the masses are politically powerless: "Voting doesn't change anything — the politicians always win." Stross advances the thesis that we have been disempowered by the rise of the corporation: first legally, when corporations were recognized as persons, and then politically, when said corporations captured the democratic process through overt and subtle forms of corruption and bribery.

Playing off the notion of corporations as "persons", Stross portrays the corporation as a "hive organism" which does not share human priorities; corporations are "non-human entities with non-human goals", which can "co-opt" CEOs or politicians by rewarding them financially. The punchline to the argument is that:

In short, we are living in the aftermath of an alien invasion.

I like this argument a lot, but it seems to me that it's less an argument about the corporation as such than an argument about capitalism. Indeed, Marx spoke about capitalism in remarkably similar terms. He notes that the underlying dynamic of capitalism is M-C-M': the use of money to produce and circulate commodities solely for the purpose of accumulating more capital. Money itself is the agent here, not any person. This abstract relationship is more fundamental than the relations between actual people--capitalists and workers--whose actions are dictated by the exigencies of capital accumulation. From Capital, chapter four:

The circulation of money as capital is, on the contrary, an end in itself, for the expansion of value takes place only within this constantly renewed movement. The circulation of capital has therefore no limits.

As the conscious representative of this movement, the possessor of money becomes a capitalist. His person, or rather his pocket, is the point from which the money starts and to which it returns. The expansion of value, which is the objective basis or main-spring of the circulation M-C-M, becomes his subjective aim, and it is only in so far as the appropriation of ever more and more wealth in the abstract becomes the sole motive of his operations, that he functions as a capitalist, that is, as capital personified and endowed with consciousness and a will.

According to Marx, the alien invasion hasn't just co-opted its human agents but actually corrupted and colonized their minds, so that they come to see the needs of capital as their own needs. Thus the workers find themselves exploited and alienated, not fundamentally by capitalists but by the alien force, capital, which uses the workers only to reproduce itself. From chapter 23:

The labourer therefore constantly produces material, objective wealth, but in the form of capital, of an alien power that dominates and exploits him; and the capitalist as constantly produces labour-power, but in the form of a subjective source of wealth, separated from the objects in and by which it can alone be realised; in short he produces the labourer, but as a wage labourer. This incessant reproduction, this perpetuation of the labourer, is the sine quâ non of capitalist production.

This, incidentally, is why Maoists like The Matrix.

Moishe Postone makes much of this line of argument in his brilliant Time, Labor, and Social Domination. He emphasizes (p. 30) the point that:

In Marx's analysis, social domination in capitalism does not, on its most fundamental level, consist in the domination of people by other people, but in the domination of people by abstract social structures that people themselves constitute.

Therefore,

the form of social domination that characterizes capitalism is not ultimately a function of private property, of the ownership by the capitalists of the surplus product and the means of production; rather, it is grounded in the value form of wealth itself, a form of social wealth that confronts living labor (the workers) as a structurally alien and dominant power.

Since the "aliens" are of our own making, the proper science fiction allegory isn't an extraterrestrial invasion but a robot takeover, like the Matrix or Terminator movies. But close enough.

So in light of my last post, does this make Capital an early work of science fiction? Or does it make contemporary science fiction the leading edge of Marxism? Both, I'd like to think.

Social Science Fiction

December 8th, 2010  |  Published in Art and Literature, Social Science  |  5 Comments

Henry Farrell has a nice discussion of some recent debates about steampunk novels. He refers to Charles Stross's complaint that much steampunk is so infatuated with gadgets and elites that it willfully turns away from the misery and exploitation that characterized real Victorian capitalism. He also approvingly notes Cosma Shalizi's argument that "The Singularity has happened; we call it 'the industrial revolution'". Farrell builds on this point by noting that "one of the skeins one can trace back through modern [science fiction] is a vein of sociological rather than scientific speculation, in which events happening to individual characters serve as a means to capture arguments about what is happening to society as a whole". The interesting thing about the 19th century, then, is that it is a period of rapid social transformation, and SF is an attempt to understand the implications of such rapid change. In a similar vein, Patrick Nielsen Hayden quotes Nietzsche: "The press, the machine, the railway, the telegraph are premises whose thousand-year conclusion no one has yet dared to draw."

This relates to some of my own long-gestating speculations about the relationship between science fiction and social science. My argument is that both fields can be understood as projects that attempt to understand empirical facts and lived experience as something which is shaped by abstract--and not directly perceptible--structural forces. But whereas social science attempts to derive generalities about society from concrete observations, SF derives possible concrete events from the premise of certain sociological generalities. Note that this definition makes no reference to the future or the past: science fiction can be about the past, like steampunk, but it is the working out of an alternative past, one which branches off from our own timeline according to clear differences in social structure and technology. If social science is concerned with constructing a model (whether quantitative or qualitative) on the basis of data, then we can think of a science-fictional world by analogy to a prediction from an existing model, such as a fitted statistical model: any particular point prediction reflects both the invariant properties of the model's parameters and the uncertainty and random variation that makes individual cases idiosyncratic.
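The statistical analogy can be made concrete with a toy sketch (all numbers and function names here are hypothetical, chosen purely for illustration): the fitted parameters stand in for the social theory, while each simulated draw stands in for one possible concrete world consistent with that theory.

```python
import random

random.seed(42)

# Pretend these parameters were estimated from historical data:
# they are the "general theory" of how the world works.
slope, intercept, noise_sd = 2.0, 1.0, 0.5

def predict_point(x):
    """The model's systematic prediction: pure generality, no contingency."""
    return slope * x + intercept

def simulate_world(x):
    """One concrete 'future': the generality plus idiosyncratic variation."""
    return predict_point(x) + random.gauss(0, noise_sd)

x = 3.0
print(predict_point(x))  # the invariant part: 7.0
print([simulate_world(x) for _ in range(3)])  # three possible concrete worlds
```

Each simulated world differs in its accidental details, but all of them express the same underlying structure--which is roughly the relation a science-fictional scenario bears to its theory of posterity.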

The following are a few semi-related musings on this theme.

I. The Philosophy of Posterity

One kind of sociologically-driven science fiction is the working out of what I will call a theory of posterity. Posterity, here, is meant to imply the reverse of history. And a theory of posterity, in turn, is an inversion of the logic of a theory of history, or of the logic of social science more generally.

History is a speculative enterprise in which the goal is to construct an abstract conception of society, derived from its concrete manifestations. That is, given recorded history, the historian attempts to discern the large, invisible social forces that generated these events. It is a process of constructing a story about the past, or as Benjamin puts it:

To articulate what is past does not mean to recognize “how it really was.” It means to take control of a memory....

Or consider Benjamin's famous image of the "angel of history":

His face is turned towards the past. Where we see the appearance of a chain of events, he sees one single catastrophe, which unceasingly piles rubble on top of rubble and hurls it before his feet. He would like to pause for a moment so fair [verweilen: a reference to Goethe’s Faust], to awaken the dead and to piece together what has been smashed. But a storm is blowing from Paradise, it has caught itself up in his wings and is so strong that the Angel can no longer close them. The storm drives him irresistibly into the future, to which his back is turned, while the rubble-heap before him grows sky-high.

One way to read this is that the pile of rubble is the concrete accumulation  of historical events, while the storm represents the social forces--especially capitalism, in Benjamin's reading--which drive the logic of events.

But consider what lies behind the angel of history: the future. We cannot know what, concretely, will happen in the future. But we know about the social forces--the storm--which are pushing us inexorably into that future. Herein lies the distinction between the study of history and the study of posterity: a theory of posterity is an attempt to turn the angel of history around, and to tell us what it sees.

Where the historian takes empirical data and historical events and uses them to build up a theory of social structure, a theory of posterity begins with existing social forces and structures, and derives possible concrete futures from them. The social scientist must pick through the collection of empirical details--whether in the form of archives, ethnographic narratives, or census datasets--and decide which are relevant to constructing a general theory, and which are merely accidental and contingent features of the record. Likewise, constructing an understanding of the future requires sorting through all the ideas and broad trends and institutions that exist today, in order to determine which will have important implications for later events, and which will be transient and inconsequential.

Because it must construct the particular out of the general, the study of posterity is most effectively manifested in fiction, which excels in the portrayal of concrete detail, whereas the study of the past takes the form of social science, which is built to represent abstractions. Fictional futures are always preferable to those works of "futurism" which attempt to directly predict the future, obscuring the inherent uncertainty and contingency of that future, and thereby stultifying the reader. Science fiction is to futurism what social theory is to conspiracy theory: an altogether richer, more honest, and more humble enterprise. Or to put it another way, it is always more interesting to read an account that derives the general from the particular (social theory) or the particular from the general (science fiction), rather than attempting to go from the general to the general (futurism) or the particular to the particular (conspiracism).

Science fiction can be understood as a way of writing that adopts a certain general theory of posterity, one which gives a prominent role to science and technology, and then describes specific events that would be consistent with that theory. But that generalization conceals a great diversity of different understandings. To understand a work of speculative fiction, therefore, it helps to understand the author's theory of posterity.

II. Charles Stross: the Sigmoid Curve and Punctuated Equilibrium

The work of Charles Stross provides an illuminating case study. Much of his work deals with the near-future, and thus is centrally concerned with extrapolating current social trends in various directions. His most acclaimed novel, Accelerando, is an account of "the singularity": the moment when rapidly accelerating technological progress gives rise to incomprehensibly post-human intelligences.

Stross's theory of posterity, like that of most science fiction, begins from the interaction of social structure and technology. This is rather too simple a formulation, however, as it tends to imply a sort of technological determinism, where technical developments are considered to be a process that goes on outside of society, and affects it as an external force. Closer to the spirit of Stross--and most good SF--is the following from Marx:

Technology discloses man’s mode of dealing with Nature, the process of production by which he sustains his life, and thereby also lays bare the mode of formation of his social relations, and of the mental conceptions that flow from them.

This formulation, to which David Harvey is quite partial, reveals that technology is not an independent "thing" but rather an intersection of multiple human relationships--the interchange with nature, the process of production (and, we might add, reproduction), and culture.

Stross's theory of posterity places technology at the nexus of capital accumulation, consumer culture, and the state, in its function as the guarantor of contract and property rights. Thus in Accelerando, and also in books like Halting State, financial engineering, video games, hackers, intellectual property, and surveillance interact, and all of them push technology forward in particular directions. This is the mechanism by which Stross arrives at his ironic dystopia in which post-human intelligence takes the form of "sentient financial instruments" and "alien business models".

In surveying this vision, a question arises about the way technological development is portrayed in any theory of posterity. It has been a common trope in science fiction to simply take present-day trends and extrapolate them indefinitely into the future, without regard for any major change in the direction of development. Stross himself has observed this tendency: in the first half of the 20th century, the most rapid technological advances came in the area of transportation. People projected this into the future, and consequently science fiction of that era tended to produce things like flying cars, interstellar space travel, etc.

The implicit model of progress that gave rise to these visions was one in which technology develops according to an exponential curve:

[Figure: an exponential growth curve]

The exponential model of development also underpins many popular conceptions of the technological singularity, such as that of Ray Kurzweil. As we reach the rapidly upward-sloping part of the curve, the thinking goes, technological and social change becomes so rapid as to be unpredictable and unimaginable.

But Stross observes that the exponential model probably misconstrues what technical change really looks like. In the case of transportation, he notes that the historical pattern fits a different kind of function:

We can plot this increase in travel speed on a graph — better still, plot the increase in maximum possible speed — and it looks quite pretty; it's a classic sigmoid curve, initially rising slowly, then with the rate of change peaking between 1920 and 1950, before tapering off again after 1970. Today, the fastest vehicle ever built, NASA's New Horizons spacecraft, en route to Pluto, is moving at approximately 21 kilometres per second — only twice as fast as an Apollo spacecraft from the late-1960s. Forty-five years to double the maximum velocity; back in the 1930s it was happening in less than a decade.

Below is the sigmoid curve:

[Figure: a sigmoid (S-shaped) growth curve]
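The contrast between the two growth models can be sketched numerically (a toy illustration with arbitrary parameters, not Stross's actual data): under exponential growth the rate of change keeps accelerating without limit, while under logistic (sigmoid) growth the rate of change peaks mid-curve and then tapers off.

```python
import math

def exponential(t, rate=0.1):
    """Exponential growth: later is always faster."""
    return math.exp(rate * t)

def sigmoid(t, midpoint=50.0, steepness=0.1):
    """Logistic growth: slow start, steep middle, saturation at the end."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

def rate_of_change(f, t, dt=1e-3):
    """Numerical derivative: how fast the curve is rising at time t."""
    return (f(t + dt) - f(t - dt)) / (2 * dt)

# Exponential: progress at t=90 is faster than at t=10.
print(rate_of_change(exponential, 10) < rate_of_change(exponential, 90))  # True

# Sigmoid: the steep part is in the middle; by t=90 progress has slowed again.
mid = rate_of_change(sigmoid, 50)
print(rate_of_change(sigmoid, 10) < mid and rate_of_change(sigmoid, 90) < mid)  # True
```

On the exponential reading, a forecaster extrapolating from the steep part of the curve is always right; on the sigmoid reading, that same extrapolation is exactly how mid-century SF ended up with flying cars.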

It might seem as though Accelerando, at least, isn't consistent with this model, since it looks more like a Kurzweil-style exponential singularity. But another way of looking at it is that the sigmoid curve simply plays out over a very long time scale: the middle parts of the book portray incredibly rapid changes, but by the end of the book the characters once again seem to be living in a world of fairly sedate development. This environment is investigated further in the follow-up Glasshouse, which pushes the singularity story perhaps as far as it will go--to the point where it begins to lose all contact with the present, rendering further extrapolation impossible.

What's most interesting about the sigmoid-curve interpretation of technology, however, is what it implies about the interaction between different technological sectors over the course of history. Rather than ever-accelerating progress, the history of technology now looks to be characterized by something like what paleontologists call Punctuated Equilibrium: long periods of relative stasis, interspersed with brief spasms of rapid evolution. If history works this way, then projecting the future becomes far more difficult. The most important elements of the present mix of technologies are not necessarily the most prominent ones; it may be that some currently insignificant area will, in the near future, blow up to become the successor to the revolution in Information Technology.

In a recent speech, Stross further elaborates on this framework as it relates to present trends in technology. He goes farther than in previous work in rejecting a key premise of the singularity, which is that the exponential growth in raw computing power will continue indefinitely:

I don't want to predict what we end up with in 2020 in terms of raw processing power; I'm chicken, and besides, I'm not a semiconductor designer. But while I'd be surprised if we didn't get an order of magnitude more performance out of our CPUs between now and then — maybe two — and an order of magnitude lower power consumption — I don't expect to see the performance improvements of the 1990s or early 2000s ever again. The steep part of the sigmoid growth curve is already behind us.

However, Stross notes that even as the acceleration in processor power drops, we are seeing a distinct kind of development based on ubiquitous fast wireless Internet connections and portable computing devices like the iPhone. The consequence of this is to erode the distinction between the network and the "real" world:

Welcome to a world where the internet has turned inside-out; instead of being something you visit inside a box with a coloured screen, it's draped all over the landscape around you, invisible until you put on a pair of glasses or pick up your always-on mobile phone. A phone which is to today's iPhone as a modern laptop is to an original Apple II; a device which always knows where you are, where your possessions are, and without which you are — literally — lost and forgetful.

This is, essentially, the world of Manfred Macx in the opening chapters of Accelerando.  It is incipient in the world of Halting State, and its further development will presumably be interrogated in that book's sequel, Rule 34.

III. William Gibson and the Technicians of Culture

William Gibson is another writer who has considered the near future, and his picture in Pattern Recognition and Spook Country maps out a consensus future rather similar to Stross's. In particular, the effacing of the boundary between the Internet and everyday life is ever-present in these books, right down to a particular device--the special glasses which project images onto the wearer's environment--that plays a central role for both writers.

Yet technology for Gibson is embedded in a different social matrix. The state and its bureaucracy are less present than in Stross; indeed, Gibson's work is redolent of 1990s-style imaginings of the globalized world, after the withering of the nation-state. Capital, meanwhile, is ever-present, but its leading edge is quite different. Rather than IP cartels or financiers or game designers, the leading force in Gibson's world is the culture industry, and in particular advertising and marketing.

This is in keeping with Gibson's general affinity for, and deep intuitive understanding of, the world of consumer commodities. His books are less about technology than they are meditations on consumer culture and its objects; the loving way in which brands and products are described reveals Gibson's own infatuation with these commodities. Indeed, his instincts are so well tuned that an object at the center of Pattern Recognition turned out to be a premonition of an actual commodity.

This all leads logically to a theory of the future in which changes in society and technology are driven by elements of the culture industry: maverick ad executives, cool-hunters, former indie-rock stars and avant-garde artists all figure in the two recent works. Gibson maintains a conception of the high culture-low culture divide, and the complex interrelation between the two poles, which is lacking in Stross. The creation and re-creation of symbols and meaning is the central form of innovation in his stories.

Insofar as Gibson's recent writing is the working out of a social theory, its point of departure is Fredric Jameson's theorization of postmodern capitalist culture. Jameson observed back in the 1970s that one of the definitive characteristics of late capitalism was that "aesthetic production today has become integrated into commodity production generally". Gibson, like Stross and other science fiction writers, portrays the effects of rapid change in the technologies of production, but in this case it is the technologies of aesthetic production rather than the assembly line, transportation, or communication.

And it does indeed seem that cultural innovation and recombination have accelerated rapidly in the past few decades. But in light of Stross, the question becomes: are we reaching the top of the sigmoid curve? It sometimes seems that we are moving into a world where Capital is more and more concerned with extracting rents from the control of "intellectual property" than with pushing toward any kind of historically progressive technological or even cultural innovation. But I will save the working out of that particular theory of posterity for another post.

Contagious Delegitimation

November 16th, 2010  |  Published in Politics

Felix Salmon has an interesting post in which he notes how the Federal Reserve is being politicized by the debate over Quantitative Easing 2: Electric Boogaloo:

Bernanke "was already making a high-stakes economic bet with QE2," he says, "and now it’s a political bet, as well". If QE2 doesn’t work — Sloan raises the specter that it "could imperil the dollar and our financial system" — then it’s not just the economy which will be harmed, but also the Fed’s long-term credibility and pre-eminence. In fact, the politics of QE2 are already hobbling the Fed’s freedom of movement . . .

It's worth recalling how the Fed found itself in this position. Most Keynesian economists believe that the first-best policy choice under present economic conditions is another stimulus bill. But Congress will not pass such a bill. QE2 is an inferior second-best policy, which is being employed because the Fed is able to act independently without getting permission from politicians. But this in turn exposes the Fed to the kind of political battles that Salmon is describing.

Which makes me think that we're watching as the entire American system of government is destabilized by what we might call "contagious delegitimation". What happens is that one part of the government loses its ability to act in the face of a crisis, which causes it to lose its legitimacy. This puts pressure on other government institutions to take up the slack. But because of the inability to coordinate with other, now non-functional parts of the government, these institutions are less effective than they might be, and so they too lose their legitimacy. There is a feedback loop here, as a loss of legitimacy leads to a diminished capacity to act, which in turn further erodes legitimacy.

The legitimation crisis in the United States probably begins in the states. They are required to balance their budgets, and are thus forced to make pro-cyclical budget adjustments that deepen the recession. Moreover, in big states like New York and California, existing dysfunctions of the political system have accreted over time and made the government both ineffectual and illegitimate in the eyes of voters.

The federal government was able to step in and support the states to some extent, through programs like the extension of unemployment insurance and the stimulus bill. But Congress, too, is increasingly unable to act, because Republicans have realized that their best chance of winning the Presidency in 2012 is to sow chaos and intransigently block all of the Democratic initiatives.

Which brings us back to the Fed. Kevin Drum suggests that the real impact of QE2 is likely to be relatively small no matter what, which means that Republicans will be able to spin the result in their favor and thereby delegitimate the Fed. So where does the political system go after that? The Supreme Court has been somewhat insulated from the wave of delegitimation, but that's partly because the Roberts court has been careful to promote conservative policy goals quietly, while paying false deference to precedent. If the court chooses to make a major political play--by ruling Obama's health care bill unconstitutional, for example--then it will be sucked into the vortex as well.

This is the backdrop against which the 2010 midterm elections ought to be understood. Control over congress seems to be whipping back and forth between the parties--not because either party can put a compelling vision before voters, but rather because the electorate delivers one no-confidence vote after another to parties which are both captured by narrow ruling class interests. As the political scientist Thomas Ferguson puts it, "The political system is disintegrating, probably heading toward a real breakdown of some sort."

And this is not to speak of what is happening internationally, where American legitimacy also appears to be decaying even though the U.S. remains economically and militarily central to the global order.

Inflation and unemployment

September 20th, 2010  |  Published in Politics

An interesting paragraph from James Surowiecki in the New Yorker, who observes that an increase in inflation would be good for employment growth:

Unfortunately, when the Fed meets this week, it’s unlikely to be talking up the merits of an inflation boost. Central bankers are congenitally obsessed with the dangers of inflation and are more concerned with stable prices than with lost jobs. Also, the Fed, by its nature, looks after the interests of lenders, for whom inflation is generally bad news. But there’s a more basic reason, too: people really, really hate inflation. In polls, voters regularly cite high prices as one of their biggest concerns, even when inflation is low. A 2001 study that looked at the “macroeconomics of happiness” found that higher inflation put a severe dent in how happy people reported themselves to be. The distaste for inflation is such that a 1996 study (titled, aptly, “Why Do People Dislike Inflation?”), by the Yale economist Robert Shiller, found that, in countries around the world, sizable majorities said that they would prefer low inflation and high unemployment to high inflation and low unemployment, even if that meant that millions of extra people would go without work.

If we stipulate that there is in fact a trade-off between inflation and unemployment, and that people really hate inflation, then a policy of minimizing inflation would appear to result in diffuse benefits and concentrated costs. That is, most people benefit (at least psychologically) from lower inflation, but at the cost of imposing severe hardship on the unemployed who would otherwise have jobs.

All things being equal, one would expect this to be an unlikely and unstable policy outcome. It's generally hard to impose a policy with concentrated costs and diffuse benefits, since the losers will tend to care about opposing the policy more than the winners will care about defending it. In this case, the losers (the unemployed) should mobilize to demand employment-generating policies, even at the cost of increasing inflation.

In this case, such a dynamic is probably blocked by the fact that the unemployed are a weak and disorganized interest group. Politicians are also able to avoid blame for unemployment by what Paul Pierson called the strategy of obfuscation, in which politicians act (or fail to act) in ways that conceal the relationship between their actions and policy outcomes. In this case, there are two stages of obfuscation. First, most people probably aren't very aware of the degree to which the Obama administration has refused to change Federal Reserve policy on inflation, chiefly through their desultory approach to getting Federal Reserve Board nominees confirmed. Second, people also don't really understand the relationship between inflation targeting and unemployment. As a result, there's no demand for a policy that promotes both inflation and employment.

In this context it's all the more important that the people most affected by bad policy get organized. Hence the importance of things like the Unemployed Workers Action Group.