Archive for 2010

December 20th, 2010  |  Published in Data, Social Science, Statistical Graphics, Time

It appears that everyone with a presence on the Internet is obligated to post some kind of riff on the amazing Google Ngram Viewer. Via Henry Farrell, I see that Daniel Little has attempted to perpetrate some social science, which made me think that perhaps while I'm at it, I can post something that actually relates to my dissertation research for a change. Hence, this:

Click for a bigger version, but the gist is that the red line indicates the phrase "higher wages", and the blue line is "shorter hours". Higher wages have a head start, with hours not really appearing on the agenda until the late 19th century. That's a bit later than I expected, but it's generally consistent with what I know about hours-related labor struggle in the 19th century.
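Under the hood, the Ngram Viewer is charting the share of all n-grams published in each year that match a given phrase. Here is a toy sketch of that computation; the mini-corpus and the function are invented for illustration, and the real Viewer aggregates millions of scanned books rather than a single string:

```python
from collections import Counter

def phrase_frequency(text, phrase):
    """Fraction of all two-word windows in `text` that match `phrase`."""
    words = text.lower().split()
    bigrams = list(zip(words, words[1:]))
    if not bigrams:
        return 0.0
    target = tuple(phrase.lower().split())
    return Counter(bigrams)[target] / len(bigrams)

# A hypothetical mini-corpus standing in for one year's worth of books.
corpus = ("the union demanded higher wages and the union demanded "
          "shorter hours but the press discussed higher wages more")

print(phrase_frequency(corpus, "higher wages"))   # 2 of 17 bigrams
print(phrase_frequency(corpus, "shorter hours"))  # 1 of 17 bigrams
```

Run over a corpus sliced by publication year, the same calculation yields exactly the kind of time series plotted above.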

The 20th century is the more interesting part of the graph in any case. For a while, it seems that discussion of wages and hours moves together. They rise in the period of ferment after World War I, and again during the depression. Both decline during World War II, which is unsurprising--both wage and hour demands were subordinated to the mobilization for war. But then after the war, the spike in mentions of "higher wages" greatly outpaces mentions of "shorter hours"--the latter has only a small spike, and thereafter the phrase enters a secular decline right through to the present.

Interest in higher wages appears to experience a modest revival in the 1970s, corresponding to the beginnings of the era of wage stagnation that we are still living in. But for the first time, there is no corresponding increase in discussion of shorter hours. This is again not really surprising, since the disappearance of work-time reduction from labor's agenda has been widely remarked upon. But it's still pretty interesting to see such evidence of it in the written corpus.

Anti-Star Trek: A Theory of Posterity

December 14th, 2010  |  Published in anti-Star Trek, Art and Literature, Political Economy

In the process of trying to pull together some thoughts on intellectual property, zero-marginal-cost goods, immaterial labor, and the incipient transition to a rentier form of capitalism, I've been working out a thought experiment: a possible future society I call anti-Star Trek. Consider this a stab at a theory of posterity.

One of the intriguing things about the world of Star Trek, as Gene Roddenberry presented it in The Next Generation and subsequent series, is that it appears to be, in essence, a communist society. There is no money, everyone has access to whatever resources they need, and no one is required to work. Liberated from the need to engage in wage labor for survival, people are free to get in spaceships and go flying around the galaxy for edification and adventure. Aliens who still believe in hoarding money and material acquisitions, like the Ferengi, are viewed as barbaric anachronisms.

The technical condition of possibility for this society comprises two basic components. The first is the replicator, a technology that can make instant copies of any object with no input of human labor. The second is an apparently unlimited supply of free energy, due to anti-matter reactions or dilithium crystals or whatever. It is, in sum, a society that has overcome scarcity.

Anti-Star Trek takes these same technological premises: replicators, free energy, and a post-scarcity economy. But it casts them in a different set of social relations. Anti-Star Trek is an attempt to answer the following question:

• Given the material abundance made possible by the replicator, how would it be possible to maintain a system based on money, profit, and class power?

Economists like to say that capitalist market economies work optimally when they are used to allocate scarce goods. So how to maintain capitalism in a world where scarcity can be largely overcome? What follows are some steps toward an answer to this question.

Like industrial capitalism, the economy of anti-Star Trek rests on a specific state-enforced regime of property relations. However, the kind of property that is central to anti-Star Trek is not physical but intellectual property, as codified legally in the patent and copyright system. While contemporary defenders of intellectual property like to speak of it as though it is broadly analogous to other kinds of property, it is actually based on a quite different principle. As the (libertarian) economists Michele Boldrin and David K. Levine point out:

Intellectual property law is not about your right to control your copy of your idea - this is a right that . . . does not need a great deal of protection. What intellectual property law is really about is about your right to control my copy of your idea. This is not a right ordinarily or automatically granted to the owners of other types of property. If I produce a cup of coffee, I have the right to choose whether or not to sell it to you or drink it myself. But my property right is not an automatic right both to sell you the cup of coffee and to tell you how to drink it.

This is the quality of intellectual property law that provides an economic foundation for anti-Star Trek: the ability to tell others how to use copies of an idea that you "own". In order to get access to a replicator, you have to buy one from a company that licenses you the right to use a replicator. (Someone can't give you a replicator or make one with their replicator, because that would violate their license). What's more, every time you make something with the replicator, you also need to pay a licensing fee to whoever owns the rights to that particular thing. So if the Captain Jean-Luc Picard of anti-Star Trek wanted "tea, Earl Grey, hot", he would have to pay the company that has copyrighted the replicator pattern for hot Earl Grey tea. (Presumably some other company owns the rights to cold tea.)

This solves the problem of how to maintain for-profit capitalist enterprise, at least on the surface. Anyone who tries to supply their needs from their replicator without paying the copyright cartels would become an outlaw, like today's online file-sharers. But if everyone is constantly being forced to pay out money in licensing fees, then they need some way of earning money, and this brings up a new problem. With replicators around, there's no need for human labor in any kind of physical production. So what kind of jobs would exist in this economy? Here are a few possibilities.

1. The creative class. There will be a need for people to come up with new things to replicate, or new variations on old things, which can then be copyrighted and used as the basis for future licensing revenue. But this is never going to be a very large source of jobs, because the labor required to create a pattern that can be infinitely replicated is orders of magnitude less than the labor required in a physical production process in which the same object is made over and over again. What's more, we can see in today's world that lots of people will create and innovate on their own, without being paid for it. The capitalists of anti-Star Trek would probably find it more economical to simply pick through the ranks of unpaid creators, find new ideas that seem promising, and then buy out the creators and turn the idea into the firm's intellectual property.

2. Lawyers. In a world where the economy is based on intellectual property, companies will constantly be suing each other for alleged infringements of each other's copyrights and patents. This will provide employment for some significant fraction of the population, but again it's hard to see this being enough to sustain an entire economy. Particularly because of a theme that will arise again in the next couple of points: just about anything can, in principle, be automated. It's easy to imagine big intellectual property firms coming up with procedures for mass-filing lawsuits that rely on fewer and fewer human lawyers. On the other hand, perhaps an equilibrium will arise where every individual needs to keep a lawyer on retainer, because they can't afford the cost of auto-lawyer software but they must still fight off lawsuits from firms attempting to win big damages for alleged infringement.

3. Marketers. As time goes on, the list of possible things you can replicate will only continue to grow, but people's money to buy licenses--and their time to enjoy the things they replicate--will not grow fast enough to keep up. The biggest threat to any given company's profits will not be the cost of labor or raw materials--since they don't need much or any of those--but rather the prospect that the licenses they own will lose out in popularity to those of competitors. So there will be an unending and cut-throat competition to market one company's intellectual properties as superior to the competition's: Coke over Pepsi, Ford over Toyota, and so on. This should keep a small army employed in advertising and marketing. But once again, beware the spectre of automation: advances in data mining, machine learning and artificial intelligence may lessen the amount of human labor required even in these fields.

4. Guard labor. The term "Guard Labor" is used by the economists Bowles and Jayadev to refer to:

The efforts of the monitors, guards, and military personnel . . . directed not toward production, but toward the enforcement of claims arising from exchanges and the pursuit or prevention of unilateral transfers of property ownership.

In other words, guard labor is the labor required in any society with great inequalities of wealth and power, in order to keep the poor and powerless from taking a share back from the rich and powerful. Since the whole point of anti-Star Trek is to maintain such inequalities even when they appear economically superfluous, there will obviously still be a great need for guard labor. And the additional burden of enforcing intellectual property restrictions will increase demand for such labor, since it requires careful monitoring of what was once considered private behavior. Once again, however, automation looms: robot police, anyone?

These, it seems to me, would be the main sources of employment in the world of anti-Star Trek. It seems implausible, however, that this would be sufficient--the society would probably be subject to a persistent trend toward under-employment. This is particularly true given that all the sectors except (arguably) the first would be subject to pressures toward labor-saving technological innovation. What's more, there is also another way for private companies to avoid employing workers for some of these tasks: turn them into activities that people will find pleasurable, and will thus do for free on their own time. Firms like Google are already experimenting with such strategies. The computer scientist Luis von Ahn has specialized in developing "games with a purpose": applications that present themselves to end users as enjoyable diversions, but which also perform a useful computational task. One of von Ahn's games asked users to identify objects in photos, and the data was then fed back into a database that was used for searching images. It doesn't take much imagination to see how this line of research could lead toward the world of Orson Scott Card's novel Ender's Game, in which children remotely fight an interstellar war through what they think are video games.

Thus it seems that the main problem confronting the society of anti-Star Trek is the problem of effective demand: that is, how to ensure that people are able to earn enough money to be able to pay the licensing fees on which private profit depends. Of course, this isn't so different from the problem that confronted industrial capitalism, but it becomes more severe as human labor is increasingly squeezed out of the system, and human beings become superfluous as elements of production, even as they remain necessary as consumers.

Ultimately, even capitalist self-interest will require some redistribution of wealth downward in order to support demand. Society reaches a state in which, as the late André Gorz put it, "the distribution of means of payment must correspond to the volume of wealth socially produced and not to the volume of work performed". This is particularly true--indeed, it is necessarily true--of a world based on intellectual property rents rather than on value based on labor-time.

But here the class of rentier-capitalists will confront a collective action problem. In principle, it would be possible to sustain the system by taxing the profits of profitable firms and redistributing the money back to consumers--possibly as a no-strings-attached guaranteed income, and possibly in return for performing some kind of meaningless make-work. But even if redistribution is desirable from the standpoint of the class as a whole, any individual company or rich person will be tempted to free-ride on the payments of others, and will therefore resist efforts to impose a redistributive tax. Of course, the government could also simply print money to give to the working class, but the resulting inflation would just be an indirect form of redistribution and would also be resisted. Finally, there is the option of funding consumption through consumer indebtedness--but this merely delays the demand crisis rather than resolving it, as residents of the present know all too well.

This all sets the stage for ongoing stagnation and crisis in the world of anti-Star Trek. And then, of course, there are the masses. Would the power of ideology be strong enough to induce people to accept the state of affairs I've described? Or would people start to ask why the wealth of knowledge and culture was being enclosed within restrictive laws, when "another world is possible" beyond the regime of artificial scarcity?

Marx’s Theory of Alien Nation

December 10th, 2010  |  Published in Art and Literature, Social Science, Socialism

Charles Stross hits another one out of the park today. The post attempts to explain the widespread sentiment that the masses are politically powerless: "Voting doesn't change anything — the politicians always win." Stross advances the thesis that we have been disempowered by the rise of the corporation: first legally, when corporations were recognized as persons, and then politically, when said corporations captured the democratic process through overt and subtle forms of corruption and bribery.

Playing off the notion of corporations as "persons", Stross portrays the corporation as a "hive organism" which does not share human priorities; corporations are "non-human entities with non-human goals", which can "co-opt" CEOs or politicians by rewarding them financially. The punchline to the argument is that:

In short, we are living in the aftermath of an alien invasion.

I like this argument a lot, but it seems to me that it's less an argument about the corporation as such than an argument about capitalism. Indeed, Marx spoke about capitalism in remarkably similar terms. He notes that the underlying dynamic of capitalism is M-C-M': the use of money to produce and circulate commodities solely for the purpose of accumulating more capital. Money itself is the agent here, not any person. This abstract relationship is more fundamental than the relations between actual people--capitalists and workers--whose actions are dictated by the exigencies of capital accumulation. From Capital, chapter four:

The circulation of money as capital is, on the contrary, an end in itself, for the expansion of value takes place only within this constantly renewed movement. The circulation of capital has therefore no limits.

As the conscious representative of this movement, the possessor of money becomes a capitalist. His person, or rather his pocket, is the point from which the money starts and to which it returns. The expansion of value, which is the objective basis or main-spring of the circulation M-C-M, becomes his subjective aim, and it is only in so far as the appropriation of ever more and more wealth in the abstract becomes the sole motive of his operations, that he functions as a capitalist, that is, as capital personified and endowed with consciousness and a will.

According to Marx, the alien invasion hasn't just co-opted its human agents but actually corrupted and colonized their minds, so that they come to see the needs of capital as their own needs. Thus the workers find themselves exploited and alienated, not fundamentally by capitalists but by the alien force, capital, which uses the workers only to reproduce itself. From chapter 23:

The labourer therefore constantly produces material, objective wealth, but in the form of capital, of an alien power that dominates and exploits him; and the capitalist as constantly produces labour-power, but in the form of a subjective source of wealth, separated from the objects in and by which it can alone be realised; in short he produces the labourer, but as a wage labourer. This incessant reproduction, this perpetuation of the labourer, is the sine quâ non of capitalist production.

This, incidentally, is why Maoists like The Matrix.

Moishe Postone makes much of this line of argument in his brilliant Time, Labor, and Social Domination. He emphasizes (p. 30) the point that:

In Marx's analysis, social domination in capitalism does not, on its most fundamental level, consist in the domination of people by other people, but in the domination of people by abstract social structures that people themselves constitute.

Therefore,

the form of social domination that characterizes capitalism is not ultimately a function of private property, of the ownership by the capitalists of the surplus product and the means of production; rather, it is grounded in the value form of wealth itself, a form of social wealth that confronts living labor (the workers) as a structurally alien and dominant power.

Since the "aliens" are of our own making, the proper science fiction allegory isn't an extraterrestrial invasion but a robot takeover, like the Matrix or Terminator movies. But close enough.

So in light of my last post, does this make Capital an early work of science fiction? Or does it make contemporary science fiction the leading edge of Marxism? Both, I'd like to think.

Social Science Fiction

December 8th, 2010  |  Published in Art and Literature, Social Science

Henry Farrell has a nice discussion of some recent debates about steampunk novels. He refers to Charles Stross's complaint that much steampunk is so infatuated with gadgets and elites that it willfully turns away from the misery and exploitation that characterized real Victorian capitalism. He also approvingly notes Cosma Shalizi's argument that "The Singularity has happened; we call it 'the industrial revolution'". Farrell builds on this point by noting that "one of the skeins one can trace back through modern [science fiction] is a vein of sociological rather than scientific speculation, in which events happening to individual characters serve as a means to capture arguments about what is happening to society as a whole". The interesting thing about the 19th century, then, is that it is a period of rapid social transformation, and SF is an attempt to understand the implications of such rapid change. In a similar vein, Patrick Nielsen Hayden quotes Nietzsche: "The press, the machine, the railway, the telegraph are premises whose thousand-year conclusion no one has yet dared to draw."

This relates to some of my own long-gestating speculations about the relationship between science fiction and social science. My argument is that both fields can be understood as projects that attempt to understand empirical facts and lived experience as something which is shaped by abstract--and not directly perceptible--structural forces. But whereas social science attempts to derive generalities about society from concrete observations, SF derives possible concrete events from the premise of certain sociological generalities. Note that this definition makes no reference to the future or the past: science fiction can be about the past, like steampunk, but it is the working out of an alternative past, one which branches off from our own timeline according to clear differences in social structure and technology. If social science is concerned with constructing a model (whether quantitative or qualitative) on the basis of data, then we can think of a science-fictional world by analogy to a prediction from an existing model, such as a fitted statistical model: any particular point prediction reflects both the invariant properties of the model's parameters and the uncertainty and random variation that makes individual cases idiosyncratic.
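The statistical analogy above can be made concrete. In the hypothetical sketch below, a hand-rolled least-squares fit plays the social scientist, recovering a general structure from noisy particulars, while a fresh simulated draw from the fitted model plays the science-fiction writer, deriving one concrete case from that general structure. All of the numbers are invented for illustration:

```python
import random

def fit_line(xs, ys):
    """Closed-form ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

random.seed(0)

# "History": concrete observations generated by an underlying
# structure (intercept 2.0, slope 0.5) plus idiosyncratic noise.
xs = list(range(10))
ys = [2.0 + 0.5 * x + random.gauss(0, 0.3) for x in xs]

# Social science: infer the general (a, b) from the particulars.
a, b = fit_line(xs, ys)

# Science fiction: derive one possible particular from the general --
# the structural point prediction plus fresh random variation.
scenario = (a + b * 12) + random.gauss(0, 0.3)
```

Two runs with different noise draws yield different `scenario` values from the same fitted structure, just as two novels can share one theory of posterity while narrating different futures.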

The following are a few semi-related musings on this theme.

I. The Philosophy of Posterity

One kind of sociologically-driven science fiction is the working out of what I will call a theory of posterity. Posterity, here, is meant to imply the reverse of history. And a theory of posterity, in turn, is an inversion of the logic of a theory of history, or of the logic of social science more generally.

History is a speculative enterprise in which the goal is to construct an abstract conception of society, derived from its concrete manifestations. That is, given recorded history, the historian attempts to discern the large, invisible social forces that generated these events. It is a process of constructing a story about the past, or as Benjamin puts it:

To articulate what is past does not mean to recognize “how it really was.” It means to take control of a memory....

Or consider Benjamin's famous image of the "angel of history":

His face is turned towards the past. Where we see the appearance of a chain of events, he sees one single catastrophe, which unceasingly piles rubble on top of rubble and hurls it before his feet. He would like to pause for a moment so fair [verweilen: a reference to Goethe’s Faust], to awaken the dead and to piece together what has been smashed. But a storm is blowing from Paradise, it has caught itself up in his wings and is so strong that the Angel can no longer close them. The storm drives him irresistibly into the future, to which his back is turned, while the rubble-heap before him grows sky-high.

One way to read this is that the pile of rubble is the concrete accumulation of historical events, while the storm represents the social forces--especially capitalism, in Benjamin's reading--which drive the logic of events.

But consider what lies behind the angel of history: the future. We cannot know what, concretely, will happen in the future. But we know about the social forces--the storm--which are pushing us inexorably into that future. Herein lies the distinction between the study of history and the study of posterity: a theory of posterity is an attempt to turn the angel of history around, and to tell us what it sees.

Where the historian takes empirical data and historical events and uses them to build up a theory of social structure, a theory of posterity begins with existing social forces and structures, and derives possible concrete futures from them. The social scientist must pick through the collection of empirical details--whether in the form of archives, ethnographic narratives, or census datasets--and decide which are relevant to constructing a general theory, and which are merely accidental and contingent features of the record. Likewise, constructing an understanding of the future requires sorting through all the ideas and broad trends and institutions that exist today, in order to determine which will have important implications for later events, and which will be transient and inconsequential.

Because it must construct the particular out of the general, the study of posterity is most effectively manifested in fiction, which excels in the portrayal of concrete detail, whereas the study of the past takes the form of social science, which is built to represent abstractions. Fictional futures are always preferable to those works of "futurism" which attempt to directly predict the future, obscuring the inherent uncertainty and contingency of that future, and thereby stultifying the reader. Science fiction is to futurism what social theory is to conspiracy theory: an altogether richer, more honest, and more humble enterprise. Or to put it another way, it is always more interesting to read an account that derives the general from the particular (social theory) or the particular from the general (science fiction), rather than attempting to go from the general to the general (futurism) or the particular to the particular (conspiracism).

Science fiction can be understood as a way of writing that adopts a certain general theory of posterity, one which gives a prominent role to science and technology, and then describes specific events that would be consistent with that theory. But that generalization conceals a great diversity of different understandings. To understand a work of speculative fiction, therefore, it helps to understand the author's theory of posterity.

II. Charles Stross: the Sigmoid Curve and Punctuated Equilibrium

The work of Charles Stross provides an illuminating case study. Much of his work deals with the near-future, and thus is centrally concerned with extrapolating current social trends in various directions. His most acclaimed novel, Accelerando, is an account of "the singularity": the moment when rapidly accelerating technological progress gives rise to incomprehensibly post-human intelligences.

Like most science fiction, Stross's theory of posterity begins from the interaction of social structure and technology. This is rather too simple a formulation, however, as it tends to imply a sort of technological determinism, where technical developments are considered to be a process that goes on outside of society, and affects it as an external force. Closer to the spirit of Stross--and most good SF--is the following from Marx:

Technology discloses man’s mode of dealing with Nature, the process of production by which he sustains his life, and thereby also lays bare the mode of formation of his social relations, and of the mental conceptions that flow from them.

This formulation, to which David Harvey is quite partial, reveals that technology is not an independent "thing" but rather an intersection of multiple human relationships--the interchange with nature, the process of production (and, we might add, reproduction), and culture.

Stross's theory of posterity places technology at the nexus of capital accumulation, consumer culture, and the state, in its function as the guarantor of contract and property rights. Thus in Accelerando, and also in books like Halting State, financial engineering, video games, hackers, intellectual property, and surveillance interact, and all of them push technology forward in particular directions. This is the mechanism by which Stross arrives at his ironic dystopia in which post-human intelligence takes the form of "sentient financial instruments" and "alien business models".

In surveying this vision, a question arises about the way technological development is portrayed in any theory of posterity. It has been a common trope in science fiction to simply take present-day trends and extrapolate them indefinitely into the future, without regard for any major change in the direction of development. Stross himself has observed this tendency: in the first half of the 20th century, the most rapid technological advances came in the area of transportation. People projected this into the future, and consequently science fiction of that era tended to produce things like flying cars, interstellar space travel, etc.

The implicit model of progress that gave rise to these visions was one in which technology develops according to an exponential curve.

The exponential model of development also underpins many popular conceptions of the technological singularity, such as that of Ray Kurzweil. As we reach the rapidly upward-sloping part of the curve, the thinking goes, technological and social change becomes so rapid as to be unpredictable and unimaginable.

But Stross observes that the exponential model probably misconstrues what technical change really looks like. In the case of transportation, he notes that the historical pattern fits a different kind of function:

We can plot this increase in travel speed on a graph — better still, plot the increase in maximum possible speed — and it looks quite pretty; it's a classic sigmoid curve, initially rising slowly, then with the rate of change peaking between 1920 and 1950, before tapering off again after 1970. Today, the fastest vehicle ever built, NASA's New Horizons spacecraft, en route to Pluto, is moving at approximately 21 kilometres per second — only twice as fast as an Apollo spacecraft from the late-1960s. Forty-five years to double the maximum velocity; back in the 1930s it was happening in less than a decade.

The result is the classic sigmoid curve: slow at first, steepest in the middle, and flattening out at the end.
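The contrast between the two growth models is easy to check numerically. In this sketch, a generic logistic function stands in for the sigmoid Stross describes; the rates and time scale are arbitrary, not fitted to the transport data:

```python
import math

def exponential(t, r=0.5):
    """Runaway growth: the rate of change itself keeps accelerating."""
    return math.exp(r * t)

def sigmoid(t, r=1.0):
    """Logistic curve: slow start, steep middle, tapering plateau."""
    return 1.0 / (1.0 + math.exp(-r * t))

# Per-step rate of change across each curve, from t = -6 to t = 6.
ts = list(range(-6, 7))
sigmoid_growth = [sigmoid(t + 1) - sigmoid(t) for t in ts]
exp_growth = [exponential(t + 1) - exponential(t) for t in ts]

# The exponential's increments only ever get larger, while the
# sigmoid's rise, peak in the middle (Stross's 1920-1950 window),
# and then taper off (the post-1970 plateau).
peak = sigmoid_growth.index(max(sigmoid_growth))
```

An observer standing just before the peak of the logistic increments would see a history indistinguishable from exponential takeoff, which is precisely the extrapolation trap described above.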

It might seem as though Accelerando, at least, isn't consistent with this model, since it looks more like a Kurzweil-style exponential singularity. But another way of looking at it is that the sigmoid curve simply plays out over a very long time scale: the middle parts of the book portray incredibly rapid changes, but by the end of the book the characters once again seem to be living in a world of fairly sedate development. This environment is investigated further in the followup Glasshouse, which pushes the singularity story perhaps as far as it will go--to the point where it begins to lose all contact with the present, rendering further extrapolation impossible.

What's most interesting about the sigmoid-curve interpretation of technology, however, is what it implies about the interaction between different technological sectors over the course of history. Rather than ever-accelerating progress, the history of technology now looks to be characterized by something like what paleontologists call Punctuated Equilibrium: long periods of relative stasis, interspersed with brief spasms of rapid evolution. If history works this way, then projecting the future becomes far more difficult. The most important elements of the present mix of technologies are not necessarily the most prominent ones; it may be that some currently insignificant area will, in the near future, blow up to become the successor to the revolution in Information Technology.

In a recent speech, Stross further elaborates on this framework as it relates to present trends in technology. He goes farther than in previous work in rejecting a key premise of the singularity, which is that the exponential growth in raw computing power will continue indefinitely:

I don't want to predict what we end up with in 2020 in terms of raw processing power; I'm chicken, and besides, I'm not a semiconductor designer. But while I'd be surprised if we didn't get an order of magnitude more performance out of our CPUs between now and then — maybe two — and an order of magnitude lower power consumption — I don't expect to see the performance improvements of the 1990s or early 2000s ever again. The steep part of the sigmoid growth curve is already behind us.

However, Stross notes that even as the acceleration in processor power drops, we are seeing a distinct kind of development based on ubiquitous fast wireless Internet connections and portable computing devices like the iPhone. The consequence of this is to erode the distinction between the network and "real" world:

Welcome to a world where the internet has turned inside-out; instead of being something you visit inside a box with a coloured screen, it's draped all over the landscape around you, invisible until you put on a pair of glasses or pick up your always-on mobile phone. A phone which is to today's iPhone as a modern laptop is to an original Apple II; a device which always knows where you are, where your possessions are, and without which you are — literally — lost and forgetful.

This is, essentially, the world of Manfred Macx in the opening chapters of Accelerando.  It is incipient in the world of Halting State, and its further development will presumably be interrogated in that book's sequel, Rule 34.

III. William Gibson and the Technicians of Culture

William Gibson is another writer who has considered the near future, and his picture in Pattern Recognition and Spook Country maps out a consensus future rather similar to Stross's. In particular, the effacing of the boundary between the Internet and everyday life is ever-present in these books, right down to a particular device--the special glasses which project images onto the wearer's environment--that plays a central role for both writers.

Yet technology for Gibson is embedded in a different social matrix. The state and its bureaucracy are less present than in Stross; indeed, Gibson's work is redolent of 1990s-style imaginings of the globalized world, after the withering of the nation-state. Capital, meanwhile, is ever-present, but its leading edge is quite different. Rather than IP cartels or financiers or game designers, the leading force in Gibson's world is the culture industry, and in particular advertising and marketing.

This is in keeping with Gibson's general affinity for, and deep intuitive understanding of, the world of consumer commodities. Indeed, his books are less about technology than they are meditations on consumer culture and its objects; the loving way in which brands and products are described reveals Gibson's own infatuation with these commodities. Indeed, his instincts are so well tuned that an object at the center of Pattern Recognition turned out to be a premonition of an actual commodity.

This all leads logically to a theory of the future in which changes in society and technology are driven by elements of the culture industry: maverick ad executives, cool-hunters, former indie-rock stars and avant-garde artists all figure in the two recent works. Gibson maintains a conception of the high culture-low culture divide, and the complex interrelation between the two poles, which is lacking in Stross. The creation and re-creation of symbols and meaning is the central form of innovation in his stories.

Insofar as Gibson's recent writing is the working out of a social theory, its point of departure is Fredric Jameson's theorization of postmodern capitalist culture. Jameson observed back in the 1970's that one of the definitive characteristics of late capitalism was that "aesthetic production today has become integrated into commodity production generally". Gibson, like Stross and other science fiction writers, portrays the effects of rapid change in the technologies of production, but in this case it is the technologies of aesthetic production rather than the assembly line, transportation, or communication.

And it does indeed seem that cultural innovation and recombination have accelerated rapidly in the past few decades. But in light of Stross, the question becomes: are we reaching the top of the sigmoid curve? It sometimes seems that we are moving into a world where Capital is more and more concerned with extracting rents from the control of "intellectual property" rather than pushing toward any kind of historically progressive technological or even cultural innovation. But I will save the working out of that particular theory of posterity for another post.

Contagious Delegitimation

November 16th, 2010  |  Published in Politics

Felix Salmon has an interesting post in which he notes how the Federal Reserve is being politicized by the debate over Quantitative Easing 2: Electric Boogaloo:

Bernanke "was already making a high-stakes economic bet with QE2," he says, "and now it’s a political bet, as well". If QE2 doesn’t work — Sloan raises the specter that it "could imperil the dollar and our financial system" — then it’s not just the economy which will be harmed, but also the Fed’s long-term credibility and pre-eminence. In fact, the politics of QE2 are already hobbling the Fed’s freedom of movement . . .

It's worth recalling how the Fed found itself in this position. Most Keynesian economists believe that the first-best policy choice under present economic conditions is another stimulus bill. But congress will not pass such a bill. QE2 is an inferior second-best policy, which is being employed because the Fed is able to act independently without getting permission from politicians. But this in turn exposes the Fed to the kind of political battles that Salmon is describing.

Which makes me think that we're watching as the entire American system of government is destabilized by what we might call "contagious delegitimation". What happens is that one part of the government loses its ability to act in the face of a crisis, which causes it to lose its legitimacy. This puts pressure on other government institutions to take up the slack. But because of the inability to coordinate with other, now non-functional parts of the government, these institutions are less effective than they might be, and so they too lose their legitimacy. There is a feedback loop here, as a loss of legitimacy leads to a diminished capacity to act, which in turn further erodes legitimacy.

The legitimation crisis in the United States probably begins in the states. They are required to balance their budgets, and are thus forced to make pro-cyclical budget adjustments that deepen the recession. Moreover, in big states like New York and California, existing dysfunctions of the political system have accreted over time and made the government both ineffectual and illegitimate in the eyes of voters.

The federal government was able to step in and support the states to some extent, through programs like the extension of unemployment insurance and the stimulus bill. But congress, too, is increasingly unable to act, because Republicans have realized that their best chance of winning the Presidency in 2012 is to sow chaos and intransigently block all of the Democratic initiatives.

Which brings us back to the Fed. Kevin Drum suggests that the real impact of QE2 is likely to be relatively small no matter what, which means that Republicans will be able to spin the result in their favor and thereby delegitimate the Fed. So where does the political system go after that? The Supreme Court has been somewhat insulated from the wave of delegitimation, but that's partly because the Roberts court has been careful to promote conservative policy goals quietly, while paying false deference to precedent. If the court chooses to make a major political play--by ruling Obama's health care bill unconstitutional, for example--then it will be sucked into the vortex as well.

This is the backdrop against which the 2010 midterm elections ought to be understood. Control over congress seems to be whipping back and forth between the parties--not because either party can put a compelling vision before voters, but rather because the electorate delivers one no-confidence vote after another to parties which are both captured by narrow ruling class interests. As the political scientist Thomas Ferguson puts it, "The political system is disintegrating, probably heading toward a real breakdown of some sort."

And this is not to speak of what is happening internationally, where American legitimacy also appears to be decaying even though the U.S. remains economically and militarily central to the global order.

Inflation and unemployment

September 20th, 2010  |  Published in Politics

An interesting paragraph from James Surowiecki in the New Yorker, who observes that an increase in inflation would be good for employment growth:

Unfortunately, when the Fed meets this week, it’s unlikely to be talking up the merits of an inflation boost. Central bankers are congenitally obsessed with the dangers of inflation and are more concerned with stable prices than with lost jobs. Also, the Fed, by its nature, looks after the interests of lenders, for whom inflation is generally bad news. But there’s a more basic reason, too: people really, really hate inflation. In polls, voters regularly cite high prices as one of their biggest concerns, even when inflation is low. A 2001 study that looked at the “macroeconomics of happiness” found that higher inflation put a severe dent in how happy people reported themselves to be. The distaste for inflation is such that a 1996 study (titled, aptly, “Why Do People Dislike Inflation?”), by the Yale economist Robert Shiller, found that, in countries around the world, sizable majorities said that they would prefer low inflation and high unemployment to high inflation and low unemployment, even if that meant that millions of extra people would go without work.

If we stipulate that there is in fact a trade-off between inflation and unemployment, and that people really hate inflation, then a policy of minimizing inflation would appear to result in diffuse benefits and concentrated costs. That is, most people benefit (at least psychologically) from lower inflation, but at the cost of imposing severe hardship on the unemployed who would otherwise have jobs.

All things being equal, one would expect this to be an unlikely and unstable policy outcome. It's generally hard to impose a policy with concentrated costs and diffuse benefits, since the losers will tend to care about opposing the policy more than the winners will care about defending it. In this case, the losers (the unemployed) should mobilize to demand employment-generating policies, even at the cost of increasing inflation.

In this case, such a dynamic is probably blocked by the fact that the unemployed are a weak and disorganized interest group. Politicians are also able to avoid blame for unemployment by what Paul Pierson called the strategy of obfuscation, in which politicians act (or fail to act) in ways that conceal the relationship between their actions and policy outcomes. In this case, there are two stages of obfuscation. First, most people probably aren't very aware of the degree to which the Obama administration has refused to change Federal Reserve policy on inflation, chiefly through its desultory approach to getting Federal Reserve Board nominees confirmed. Second, people also don't really understand the relationship between inflation targeting and unemployment. As a result, there's no demand for a policy that promotes both inflation and employment.

In this context it's all the more important that the people most affected by bad policy get organized. Hence the importance of things like the Unemployed Workers Action Group.

The Future of Music

May 8th, 2010  |  Published in Art and Literature, Everyday life, Political Economy

Recently The Atlantic published a piece about how "a generation of file-sharers is ruining the future of entertainment". The piece is pretty silly, since it conflates "the future of entertainment" with "the profitability of the major entertainment corporations", and in particular the record industry. Marc Weidenbaum has a nice explanation of how absurd that is. But even if you believe that the profitability of these companies is somehow necessary for us to have culture, the concern for their health seems to me wildly disingenuous and misplaced. Their troubles are not a function of "freeloaders" or the evils of the Internet. They are a result of greed and an unwillingness to part with an obsolete business model--an unwillingness that has been encouraged and abetted by the state and its approach to intellectual property law.

Here's my solution for the record companies. All they need to do is offer a service that provides:

• Unlimited music downloads...
• In a high-quality format...
• With absolutely no copy protection or other Digital Rights Management...
• For no more than $5 per month.

Why do I think this might be a success? Because it already exists. The SoulSeek network is a file-sharing service that contains a huge selection--at least for the kinds of music I tend to like. And though it's free to use, for a $5 donation you get a month of "privileges", which essentially put you at the front of the line when downloading from other users, which makes the whole experience much faster.

I've given a lot of money to SoulSeek over the past couple of years--nearly $5 a month, as it turns out. And I would have happily given that money to a similar service that gave full legal access to copyrighted downloads, and passed some of that money on to the artists. But it doesn't exist, because the record companies still believe they can force us to pay for $12 CDs and $1 iTunes song downloads. They don't cling to that model because it's the only one possible, but because they're too greedy and short-sighted to try anything else.

Of course, the record companies and their apologists would immediately claim that the model I've described isn't economically viable, and they could never make enough money from it to do all the good work they supposedly do to find and develop young artists. But even at $5 a month, there's a lot of money to be made here. If unlimited downloads at a monthly rate caught on, it could come to be something like cable TV that a large percentage of households pay for as a matter of routine. I don't think this is all that implausible: people like music almost as much as they like TV, and what I'm proposing would be an order of magnitude cheaper than cable.

According to the cable providers' trade association, there are 62.1 million basic cable subscriptions in the United States. This number of online music subscriptions, at $5 per month, would bring in around $3.7 billion of revenue. In 2005, total revenue from the sale of recorded music in the U.S. was about $4.8 billion. When you consider how much cheaper digital distribution is than manufacturing and shipping physical media, the unlimited-downloads model looks pretty competitive with traditional sales.
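
The revenue comparison is simple enough to check directly; this sketch just redoes the arithmetic with the figures quoted above:

```python
# Back-of-the-envelope check of the subscription-revenue figure,
# using the numbers quoted in the post.
subscribers = 62.1e6   # basic cable subscriptions, per the trade association
monthly_fee = 5.00     # proposed subscription price, in dollars

annual_revenue = subscribers * monthly_fee * 12
print(f"${annual_revenue / 1e9:.1f} billion per year")  # → $3.7 billion

# For comparison, 2005 U.S. recorded-music revenue was about $4.8 billion,
# before subtracting the costs of manufacturing and shipping physical media.
```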

Now, maybe this model wouldn't catch on in the way I've suggested. But if people continue to prefer buying their music a la carte, there's no reason a subscription-based service couldn't coexist with iTunes style pay-per-download. Unfortunately, there's not a whole lot of incentive for the big copyright cartels to move toward the system I've sketched out here, because the Obama administration seems intent on using the repressive power of the state to force people into consuming media in the way the media conglomerates would prefer. Atrocities like the ACTA treaty are moving us toward a world of pervasive surveillance in which our cultural wealth is kept under lock and key for the benefit of a few wealthy copyright-holders.

In light of all this, the correct response to anyone who decries the moral perfidy of file-sharers is derisive laughter. The media companies have chosen to transition into a form of rentier capitalism that requires them to wage war on their own consumers. In that environment, it can hardly be surprising that the consumers fight back.

The Abuse of Statistical Significance: A Case Study

April 18th, 2010  |  Published in Social Science, Statistics

For years now--decades, in fact--statisticians and social scientists have been complaining about the practice of testing for the presence of some relationship in data by running a regression and then looking to see whether some coefficient is statistically significant at some arbitrary confidence level (say, 95 percent). And while I completely endorse these complaints, they can often seem rather abstract. Sure, you might say, the significance level is arbitrary, and you can always find a statistically significant effect with a big enough sample size, and statistical significance isn't the same as substantive importance. But as long as you're sensitive to these limitations, surely it can't hurt to use statistical significance as a quick way of checking whether you need to pay attention to a relationship between variables, or whether it can be safely ignored?

As it turns out, a reliance on statistical significance can lead you to a conclusion that is not just imprecise or misleading, but is in fact the exact opposite of the correct answer. Until now, I've never found a really simple, clear example of this, although the stuff discussed in Andrew Gelman's "The Difference Between 'Significant' and 'Not Significant' Is Not Statistically Significant" is a good start. But now along comes Phil Birnbaum with a report of a really amazing howler of a bad result, driven entirely by misuse of statistical significance. This is going to become my go-to example of significance testing gone horribly wrong.

Birnbaum links to this article, which used a study of cricket players to argue that luck plays a big role in how people fare in the labor market. The basic argument is that cricket players do better at home than on the road, but that teams don't take this into account when deciding what players to keep for their team. The result is that some players are more likely to be dropped just because they had the bad luck to make their debut on the road.

Now, I happen to be inclined a priori to agree with this argument, at least for labor markets in general if not cricket (which I don't know anything about). And perhaps because the argument is intuitively compelling, the paper was discussed on the New York Times Freakonomics blog and on Matt Yglesias's blog. But the analysis that the authors use to make their case is entirely bogus.

Birnbaum goes into it in excellent detail, but the gist of it is as follows. They estimate a regression of the form:

$Pr(dropped) = A + B \cdot Avg + C \cdot HomeDebut + D \cdot Avg \cdot HomeDebut$

In this model, Avg is your batting average, and HomeDebut is 1 if you debut at home, 0 if you debut on the road. We expect coefficient B to be negative--if your average is lower, you have a better chance of being dropped. But if teams are taking the home field advantage into account, coefficients C and D should be positive, indicating that teams will value the same average more if it was achieved on the road rather than at home.

And what did the authors find? C and D were indeed positive. This would suggest that teams do indeed discount high averages that were achieved at home relative to those achieved on the road. Yet the authors write:

[D]ebut location is superfluous to the retention decision. Information about debut location is individually and jointly insignificant, suggesting that these committees focus singularly on debut performance,  regardless of location. This signal bias suggests that batsmen lucky enough to debut at home are more likely to do well on debut and enjoy greater playing opportunities.

How do they reach this conclusion? By noting that the coefficients for the home-debut variables are not statistically significant. But as Birnbaum points out, the magnitudes and directions of the coefficients are completely consistent with what you might expect to find if there was in fact no home-debut bias in retention decisions. And the regressions are only based on 431 observations, meaning that large standard errors are to be expected. So it's true that the confidence intervals on these coefficients include zero--but that's not the same as saying that zero is the most reasonable estimate of their true value! As the saying goes, absence of evidence is not evidence of absence. As Birnbaum says, all these authors have really shown is that they don't have enough data to properly address their question.
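
Birnbaum's sample-size point is easy to demonstrate with a small simulation. Only the n = 431 is taken from the paper; the effect size and noise level here are made up for illustration. The point is that even a genuine effect will usually fail to clear the significance bar with this little data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a real but modest home-debut effect of 0.1 standard
# deviations, with the paper's sample size of 431 observations.
n = 431
true_effect = 0.10
n_sims = 2000

significant = 0
for _ in range(n_sims):
    home = rng.integers(0, 2, n)                      # 1 if debuted at home
    outcome = true_effect * home + rng.normal(0, 1.0, n)
    # Difference-in-means t-test, equivalent to the OLS t-test in a model
    # with only the home-debut dummy.
    diff = outcome[home == 1].mean() - outcome[home == 0].mean()
    se = np.sqrt(outcome[home == 1].var(ddof=1) / (home == 1).sum() +
                 outcome[home == 0].var(ddof=1) / (home == 0).sum())
    if abs(diff / se) > 1.96:
        significant += 1

power = significant / n_sims
# Most simulated samples fail to reach significance, even though the
# effect is, by construction, really there: absence of evidence.
print(f"share of samples significant at the 5% level: {power:.2f}")
```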

Birnbaum goes into all of this in much more detail. I'll just add one additional thing that makes this case especially egregious. All the regressions use "robust standard errors" to correct for heteroskedasticity. Standard error corrections like these are very popular with economists, but this is a perfect example of why I hate them. For what does the robustness-correction consist of? In general, it makes standard errors larger. This is intended to decrease the probability of a type I error, i.e., finding an effect that is not there. But by the same token, larger standard errors increase type II error, failing to find an effect that is there. And in this case, the authors used the failure to find an effect as a vindication of their argument--so rather than making the analysis more conservative--i.e., more robust to random variation and mistaken assumptions--the "robust" standard errors actually tip the scales in favor of the paper's thesis!

It's entirely possible that the authors of this paper were totally unaware of these problems, and genuinely believed their findings because they had so internalized the ideology of significance-testing. And the bloggers who publicized this study were, unfortunately, engaging in a common vice: promoting a paper whose findings they liked, while assuming that the methodology must be sound because it was done by reputable people (in this case, IMF economists). But things like this are exactly why so many people--both inside and outside the academy--are instinctively distrustful of quantitative research. And the fact that Phil Birnbaum dug this up exemplifies what I love about amateur baseball statisticians, who tend to be much more flexible and open-minded in their approach to quantitative methods. I suspect a lot of trained social scientists would have read over this thing without giving it a second thought.

Republican Census Protestors: Myth or Reality?

April 1st, 2010  |  Published in Politics, Statistical Graphics, Statistics

April 1 is "Census Day", the day on which you're supposed to have turned in your response to the 2010 census. Of course, lots of people haven't returned their form, and the Census Bureau even has a map where you can see how the response rates look in different parts of the country.

Lately, there's been a lot of talk about the possibility that conservatives are refusing to fill out the census as a form of protest. This behavior has been encouraged by the anti-census rhetoric of elected officials such as Representatives Michele Bachmann (R-MN) and Ron Paul (R-TX). In March, the Houston Chronicle website reported that response rates in Texas were down, especially in some highly Republican areas. And conservative Republican Patrick McHenry (R-NC) was so concerned about this possible refusal--which could lead conservative areas to lose federal funding and even congressional representatives--that he went on the right-wing site redstate.com to encourage conservatives to fill out the census.

Thus far, though, we've only heard anecdotal evidence that right-wing census refusal is a real phenomenon. Below I try to apply more data to the question.

The Census Bureau provides response rates by county in a downloadable file on their website.  The data in this post were downloaded on April 1. To get an idea of how conservative a county is, we can use the results of the 2008 Presidential election, and specifically Republican share of the two-party vote--that is, the percentage of people in a county who voted for John McCain, with third-party votes excluded. The results look like this:

It certainly doesn't look like there's any overall trend toward lower participation in highly Republican counties, and indeed the correlation between these two variables is only -0.01. In fact, the highest participation seems to be in counties that are neither highly Democratic nor highly Republican, as shown by the trend line.

So, myth: busted? Not quite. There are some other factors that we should take into account that might hide a pattern of conservative census resistance. Most importantly, many demographic groups that tend to lean Democratic, such as the poor and non-whites, are also less likely to respond to the census. So even if hostility to government were holding down Republican response rates, they still might not appear to be lower than Democratic response rates overall.

Fortunately, the Census Bureau has a measure of how likely people in a given area are to be non-respondents to the census, which they call the "Hard to Count score". This combines information on income, English proficiency, housing status, education, and other factors that may make people hard to contact. My colleagues Steve Romalewski and Dave Burgoon have designed an excellent mapping tool that shows the distribution of these hard-to-count areas around the country, and have produced a report on the early trends in census response.

We can test the conservative census resistance hypothesis using a regression model that predicts 2010 census response in a county using the 2008 McCain vote share, the county Hard to Count score, and the response rate to the 2000 census. Including the 2000 rate will help us further isolate any Republican backlash to the census, since it's a phenomenon that has supposedly arisen only within the last few years. Since different counties can have wildly differing population densities, the data is weighted according to population.* The resulting model explains about 70% of the variation in census response across counties, and the equation for predicting the response looks like this:

The coefficient of 0.06 for the Republican vote share variable means that when we control for the 2000 response rate and the county HTC score, Republican areas actually have higher response rates, although the effect is pretty small. If two counties have identical HTC scores and 2000 response rates but one of them had a 10-point higher McCain vote share in 2008, we would expect the more Republican county to have a 0.6 percentage point higher 2010 census response rate.**
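
The weighted regression described above can be sketched as follows. The data here are synthetic and the column names are my own invention; only the 0.06 coefficient and the general specification come from the analysis in this post:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated county-level data standing in for the real census files.
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "mccain_share": rng.uniform(20, 90, n),   # 2008 GOP two-party share, %
    "htc_score": rng.uniform(0, 100, n),      # Hard to Count score
    "resp_2000": rng.uniform(30, 80, n),      # 2000 census response rate, %
    "population": rng.integers(500, 500_000, n),
})
# Build the outcome with a 0.06 Republican-share effect baked in.
df["resp_2010"] = (5 + 0.06 * df["mccain_share"] - 0.1 * df["htc_score"]
                   + 0.8 * df["resp_2000"] + rng.normal(0, 3, n))

# Population-weighted least squares, mirroring the post's specification.
model = smf.wls("resp_2010 ~ mccain_share + htc_score + resp_2000",
                data=df, weights=df["population"]).fit()
print(model.params["mccain_share"])  # recovers roughly the 0.06 we built in
```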

Now, recall that the original news article that started this discussion was about Texas. Maybe Texas is different? We can test that by fitting a multi-level model in which we allow the effect of Republican vote share on census response to vary between states. The result is that rather than a single coefficient for the Republican vote share (the 0.06 in the model above), we get 50 different coefficients:

Or, if you prefer to see your inferences in map form:

The reddish states are places where having more Republicans in a county is associated with a lower response rate to the census, and blue states are places where more Republican counties are associated with higher response rates.
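
The varying-slopes setup behind that map can be sketched with a mixed model. Everything here is simulated for illustration (ten fake states rather than fifty, made-up coefficients), but the structure--counties nested in states, with the Republican-share slope allowed to differ by state--is the same:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Ten hypothetical states, forty counties each, with state-specific slopes
# drawn around an average effect of 0.06.
states = np.repeat(np.arange(10), 40)
state_slope = rng.normal(0.06, 0.03, 10)
mccain = rng.uniform(30, 80, 400)
resp = 40 + state_slope[states] * mccain + rng.normal(0, 2, 400)
df = pd.DataFrame({"state": states, "mccain": mccain, "resp": resp})

# Random intercept and random slope for mccain, grouped by state.
md = smf.mixedlm("resp ~ mccain", df, groups=df["state"],
                 re_formula="~mccain").fit()
print(md.params["mccain"])  # the pooled (average) slope across states
```

The per-state slopes--the analogue of the fifty coefficients plotted above--can then be read off from the estimated random effects (`md.random_effects`).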

We see that there are a few states where Republican areas seem to have lower response rates than Democratic ones, such as South Carolina and Nebraska. Even here, though, the confidence intervals cross zero or come close to it. And Texas doesn't look particularly special: the more Republican areas there seem to have better response rates (when controlling for the other variables), just like most other places.

So given all that, how can we explain the accounts of low response rates in Republican areas? The original Houston Chronicle news article says that:

In Texas, some of the counties with the lowest census return rates are among the state's most Republican, including Briscoe County in the Panhandle, 8 percent; King County, near Lubbock, 5 percent; Culberson County, near El Paso, 11 percent; and Newton County, in deep East Texas, 18 percent.

OK, so let's look at those counties in particular. Here's a comparison of the response rate to the 2000 census, the response this year, and the response that would be predicted by the model above. (These response rates are higher than the ones quoted in the article, because they are measured at a later date.)

| County | Population | Response, 2000 | Response, 2010 | Predicted response | Error | Republican vote, 2008 |
|---|---|---|---|---|---|---|
| King County, TX | 287 | 48% | 31% | 43% | 12% | 95% |
| Briscoe County, TX | 1598 | 61% | 41% | 51% | 10% | 75% |
| Culberson County, TX | 2525 | — | 38% | — | — | 34% |
| Newton County, TX | 14090 | 51% | 34% | 43% | 9% | 66% |

The first thing I notice is that the Chronicle was fudging a bit when it called these "among the state's most Republican" counties. Culberson county doesn't look very Republican at all! The others, however, fit the bill. And for all three, the model substantially over-predicts census response. (Culberson county has no data for the 2000 response rate, so we can't get a prediction there.) It looks like there may be something going on in these counties that our model didn't capture.

To understand what's going on, let's take a look at the ten counties where the model made the biggest over-predictions of census response:

| County | Population | Response, 2000 | Response, 2010 | Predicted response | Error | Republican vote, 2008 |
|---|---|---|---|---|---|---|
| Duchesne County, UT | 15701 | 41% | 0% | 39% | 39% | 84% |
| Forest County, PA | 6506 | 68% | 21% | 57% | 36% | 57% |
| Alpine County, CA | 1180 | 67% | 17% | 49% | 32% | 37% |
| Catron County, NM | 3476 | 47% | 17% | 39% | 22% | 68% |
| St. Bernard Parish, LA | 15514 | 68% | 37% | 56% | 19% | 73% |
| Sullivan County, PA | 6277 | 63% | 35% | 53% | 18% | 60% |
| Lake of the Woods County, MN | 4327 | 46% | 27% | 45% | 18% | 57% |
| Cape May County, NJ | 97724 | 65% | 36% | 54% | 18% | 54% |
| Edwards County, TX | 1935 | 45% | 22% | 39% | 17% | 66% |
| La Salle County, TX | 5969 | 57% | 26% | 43% | 17% | 40% |

I have a hard time believing that the response rate in Duchesne county, Utah is really 0%, so that's probably some kind of error. But as for the rest, most of these counties are heavily Republican too, which suggests that maybe there is some phenomenon going on here that we just aren't capturing. But now look at the counties where the model made the biggest under-prediction--where it thought response rates would be much lower than they actually were:

| County | Population | Response, 2000 | Response, 2010 | Predicted response | Error | Republican vote, 2008 |
|---|---|---|---|---|---|---|
| Oscoda County, MI | 9140 | 37% | 66% | 36% | -30% | 55% |
| Nye County, NV | 42693 | 13% | 47% | 22% | -25% | 57% |
| Baylor County, TX | 3805 | 51% | 66% | 45% | -21% | 78% |
| Clare County, MI | 31307 | 47% | 62% | 42% | -20% | 48% |
| Edmonson County, KY | 12054 | 55% | 65% | 46% | -19% | 68% |
| Hart County, KY | 18547 | 62% | 68% | 49% | -19% | 66% |
| Dare County, NC | 33935 | 35% | 57% | 39% | -18% | 55% |
| Lewis County, KY | 14012 | 61% | 66% | 48% | -18% | 68% |
| Gilmer County, WV | 6965 | 59% | 63% | 45% | -18% | 59% |
| Crawford County, IN | 11137 | 62% | 68% | 51% | -17% | 51% |

Most of these are Republican areas too!

So what's going on? It's hard to say, but my best guess is that part of it has to do with the fact that most of these are fairly low-population counties. With a smaller population, these places are going to show more random variability in their average response rates than the really big counties. Smaller counties tend to be rural counties, and rural areas tend to be more conservative. Thus, it's not surprising that the places with the most surprising shortfalls in census response are heavily Republican--and that the places with the most surprising high response rates are heavily Republican too.
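
The small-county argument is just sampling variance at work. A minimal sketch, under the (unrealistic) assumption that every county shares the same underlying response propensity:

```python
import numpy as np

rng = np.random.default_rng(3)

# Treat each county's response rate as a binomial proportion with the
# same underlying propensity p; only the county size differs.
p = 0.60
small = rng.binomial(300, p, 5000) / 300         # ~300-household counties
large = rng.binomial(30_000, p, 5000) / 30_000   # ~30,000-household counties

# The spread of observed rates shrinks with the square root of population,
# so small rural counties dominate both tails of the residual distribution.
print(small.std(), large.std())
```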

At this point, I have to conclude that there really isn't any firm evidence of Republican census resistance. That's not to say it doesn't exist. I'm sure it does, even if it's not on a large enough scale to be noticeable in the statistics.  It's also possible that the Republican voting variable I used isn't precise enough--the sort of people who are most receptive to anti-census arguments are probably a particular slice of far-right Republican. And it's always difficult to make any firm conclusions about the behavior of individuals based on aggregates like county-level averages, without slipping into the ecological fallacy. Nonetheless, these results do suggest the strong possibility that the media have been led astray by a plausible narrative and a few cherry-picked pieces of data.

* Using unweighted models doesn't change the main conclusions, although it does bring some of the Republican vote share coefficients closer to zero--meaning that it's harder to conclude that there is any relationship between Republican voting and census response, either positive or negative.

** All of these coefficients are statistically significant at a 95% confidence level.

Pessimism of the Intellect, revisited

March 22nd, 2010  |  Published in Politics, Statistical Graphics

In light of recent events and the ambivalence expressed in the Health Care Reform thread I started at the Activist, it seemed appropriate to resurrect the graphic from this post: