Art and Literature

If you’re not dying, you’re not learning

August 18th, 2011  |  Published in Games

I’ve been making an effort to read and engage more with blogs written by women, because the recent online conversations I’ve been involved with have been oppressively dude-heavy. I’ve also been meaning to write about gaming, because I think people who love games and take them seriously should be out of the closet about it, and not give in to the stigma that still tends to relegate games to a status below that of other art forms. Fortuitously, I spotted an opportunity to hit both targets at once.

Alyssa Rosenberg is writing about her experience playing Portal. It’s a wonderful game, which I may have more to say about later, but what caught my eye was something more general about games. Rosenberg says that one thing holding her back in that game, and in games in general, is a discomfort with dying:

I’ve figured out one of the things that kept me from playing games regularly for a long time: I find dying in-game incredibly stressful.

And,

I’m surprised that there isn’t more conversation about what dying in game makes us feel about our own deaths.

I completely agree that constant player death is both a central feature of video games, and one that gets insufficient discussion. But either Rosenberg just reacts to games differently than I do, or else she has yet to get past something that I eventually dealt with when I was getting back into video games. Because while I understand the first sentiment I quoted, I think that the second is really pointing in the wrong direction in terms of helping us (or at least me) understand the meaning of video game death.

I got back into games a couple of years ago, after hardly playing them at all since the 16-bit era. And I initially struggled with in-game death as well, but I would characterize the issue a bit differently. As strange as this seems, I don’t view video game death as a signifier for real world death at all; rather, death in games is a metaphor for failure in life. After all, death in games is unlike real world death in the only way that really matters: after you die, you get to go back and try again.

This argument sort of relates to a long-running debate in games criticism between so-called “narratologists”, who treat games as vehicles for story and character and hence tend to take the story elements of the game more literally, and “ludologists”, who view games chiefly as formal systems and ludic experiences (see for instance this debate between Tom Bissell and Simon Ferrari). But I think it cuts across that debate in some ways.

I really came to terms with the nature of in-game death when I was playing through the Mass Effect games, which are some of my favorites of recent years. Being bad at games and out of practice, I wasn’t very good at the action portions of the games. And yet I didn’t want to turn down the difficulty to “easy” just to get through the story–that felt wrong, unsatisfying, and cheap. I wanted to beat the game on one of the higher difficulties, in order to feel like I had really mastered it, and really overcome a challenge.

But doing that meant dying. A lot. And I eventually realized that what I disliked about that wasn’t that dying somehow reminded me of my own mortality, but that it dredged up my fear of failure. It was as though the game was constantly reminding me how inept I was, how far my abilities fell short of my ambitions. And so the only way to get myself through the experience, and to accept repeatedly dying, was to recontextualize what failure meant. Dying no longer meant that I was bad at the game (although, proximately, it did mean that). Instead, dying meant that I had the game on a high enough difficulty level. Dying was proof that I was challenging myself, putting myself in situations where I would be forced to get better, forced to learn new ways of getting through each level.

In that way, I came to see dying as a positive sign over the course of those Mass Effect play-throughs. In fact, if I went too long without dying, I would take this as a sign that I needed to turn the difficulty slider up to the next level. I even coined a motto that I’d repeat to myself, in order to ward off complacency: If you’re not dying, you’re not learning. And if playing games has any positive value for the rest of my life, it’s summed up in that slogan. One thing that I think has tended to hold me back in a lot of areas–and I think this is true for a lot of people who are used to being successful and precocious–is a fear of trying something and failing, and thereby being exposed somehow as an incompetent or a fraud. Games helped me get a little bit better at accepting failure as a natural part of the learning process, a way of figuring out what you need to do to be successful in the future.

That’s an important thing to internalize, whether you apply it to submitting papers to journals, applying for jobs, asking people out on dates, or suggesting guitar parts to your band. Which isn’t to say that games have to be “moral vitamins” in order to be artistically legitimate, just that in this particular case they did sort of work that way for me.

Now I just need Horning to tell me how I’m actually brainwashing myself into neoliberal subjectivity…

The London Riots: A Musical Debate

August 8th, 2011  |  Published in Art and Literature, Politics

Image via Jodi Dean:

Point: Jello Biafra, Dead Kennedys

“Riot-playing into their hands \ Tomorrow you’re homeless \ Tonight it’s a blast.”

Lyrics.

Counterpoint: Boots Riley, The Coup

“That’s not chaos, that’s progress.”

Lyrics.

Eight Hours For What They Will

May 30th, 2011  |  Published in Art and Literature, Time, Work

The other day I re-watched John Carpenter’s They Live–which, for the record, is a pretty good satire of Reagan-era America, and deserves to be remembered for more than just that stupid Shepard Fairey sticker campaign. While watching, I noticed something pretty great that I missed the first time through. It shows up after the main character puts on magic sunglasses, which allow him to see that the billboards around him actually contain secret brainwashing messages. Most of these just say things like “consume” and “obey”. But check out the sign in the upper left corner of this picture:

That command, of course, is a riff on an old slogan of the 19th century labor movement, which demanded the eight-hour day using signs like this one:

As David Roediger and Philip Foner remark in their great history of American labor and the working day:

[T]he cry “Eight hours for work, eight hours for rest, eight hours for recreation,” acted as more than a common denominator. It embodied . . . the highest aspirations of the working population. It expressed cherished values. . . . In making the eight-hour system the key to equal education for children, to the continued mental development of adults, to the defense of republican virtue and class interest by an enlightened and politically active citizenry, to health, to vigor, and to social life, supporters viewed their demand as an initial step to major changes, not as a niggling reform. (pp. 98-99)

As is well known, the demand for shorter hours mostly disappeared from organized labor’s agenda after World War II, for complex and disputed reasons. The sign I saw in They Live is one consequence of abdicating the positive class argument for shorter hours. By the time the movie was made in 1988, the eight-hour movement’s greatest slogan could come to seem not like a cherished victory of the working class, but like a piece of dystopian propaganda. “Eight hours recreation” becomes the command to “play eight hours”, and this “play” is refigured as obligatory participation in consumerist culture rather than the opportunity for political, intellectual and moral development that it signified for the eight-hour campaigners. Perhaps it’s not surprising, then, that these days long hours are often portrayed as an issue of individual preferences or “workaholic” psychology, rather than the outcome of organized labor’s long political defeat.

I have a feeling this little vignette will end up in my dissertation somehow, although I don’t think I mentioned doing any cultural studies in my fellowship proposal.

Idiocracy’s Theory of the Future

January 12th, 2011  |  Published in Art and Literature, Political Economy

Mike Judge’s Idiocracy is a pretty smart and funny movie, which touches on some themes I’ve recently written about. But it’s also a widely underappreciated and misunderstood film. Perhaps that’s because one of the people who seems to misunderstand it the most is its own writer and director, Mike Judge.

The basic premise of the film, as per IMDB:

Private Joe Bauers, the definition of “average American”, is selected by the Pentagon to be the guinea pig for a top-secret hibernation program. Forgotten, he awakes 500 years in the future. He discovers a society so incredibly dumbed-down that he’s easily the most intelligent person alive.

The rest of the film is an extended satirical riff on this idiotic future society. Its residents are both unbelievably crude and endlessly capable of falling for consumerist marketing bullshit. With regard to the former: Starbucks now offers hand jobs, everyone regards reading and thinking as activities for “fags”, and one of the film’s set pieces involves a #1 hit film called “Ass”, consisting of nothing but the image described in the title. In a climactic scene Joe Bauers (played by Luke Wilson) addresses Congress, wistfully declaring that:

there was a time in this country, a long time ago, when reading wasn’t just for fags and neither was writing. People wrote books and movies, movies that had stories so you cared whose ass it was and why it was farting, and I believe that time can come again!

Meanwhile, everyone in the future mindlessly repeats advertising slogans as though they were a scientific consensus. The threat of famine looms because everyone insists on watering crops with a noxious energy drink called Brawndo, while insisting that “it’s got electrolytes . . . they’re what plants crave!” It’s left to Joe Bauers to convince his moronic fellow humans of the virtues of old fashioned water.

This sounds like the sort of thing your average anti-corporate liberal might enjoy, although I’d note that liberal yuppies are hardly immune to this sort of irrational marketing hype. But the movie made a lot of people uncomfortable, and it has been mostly forgotten since its 2006 release. In part, that’s because of the generally elitist “most people are idiots” vibe that Judge evokes. But more specifically, I think it’s because of the film’s overtly misanthropic, eugenics-minded opening:

This reaction from Manohla Dargis is typical:

“Idiocracy” expresses the kind of fear lampooned, consciously or not, in the old joke about revolting masses. (Messenger: “The masses are revolting!” King: “You’re telling me!”) It opens with a comparison between trailer-trash types, with low I.Q.’s, who freely propagate, and smarty-pants types who fret about conceiving, using every excuse to find the perfect time to have children. In the end the low I.Q.-ers overrun the intelligent, who die off, which is funny if you think that only certain kinds of people should reproduce. An equal-opportunity offender, Mr. Judge can wield satire like a sledgehammer, so it’s no surprise that he doesn’t bother with the complexities of class and representation in a bit about the dire consequences of a birth dearth.

This part of the movie is every bit as offensive and reactionary as Dargis suggests it is, and its stupidity is pretty much summed up in this xkcd cartoon. But the tragedy of the whole movie is that this premise is totally unnecessary. It’s completely possible to explain the emergence of the Idiocracy future based on sociological and political-economic themes that have nothing to do with genetic determinism, while leaving the rest of the movie mostly unchanged.

To me, one of the most interesting and suggestive bits of the movie is the following exchange toward the end of the story:

 Joe and the Cabinet Members are gathered around a VIDEO PHONE 
 talking to the CEO OF RAUNCHO, who's in his office, panicking.  
 We hear people rioting outside his building and occasionally 
 bottles and debris hit his window.

                       RAUNCHO CEO
           What happened?!

                       JOE
           Ah... Well, we switched the crops to 
           water.

                       RAUNCHO CEO
           I'm not talking about that.
                (points to a computer 
                screen, freaked out)
           Our sales are all like, down. Way 
           down! The stock went to zero and the 
           computer did auto-layoff on 
           everybody!

                       ATTORNEY GENERAL
           Shit! Almost everyone in the country 
           works for Rauncho!

                       RAUNCHO CEO
           Not anymore!  And the computer said 
           everyone owes Rauncho money! 
           Everyone's bank account is zero now!

What does this exchange tell us about the film’s implicit theory of posterity?

  1. The future economy is highly automated, to the point that even the management of companies is done automatically by a computer.
  2. People nonetheless need money to pay for things, which they get by working for Brawndo (which is called “Rauncho” in this earlier version of the screenplay). It’s not clear what they do for their money, but it can’t be very important in light of their obvious stupidity and the above-noted automation.
  3. The continued stability of this society is therefore dependent on the existence of a business which does not actually improve anyone’s material standard of living–indeed, it is decreasing it by killing all the crops.

The theory of posterity that grounds Idiocracy, it seems to me, is a close cousin of Anti-Star Trek: an economy that needs humans as consumers, but makes them mostly superfluous as producers.

So how does this explain the fact that everyone is such a moron? Well, consider what would happen to education in a society like this. If the productive economy is all run by computers, then there’s no need to teach people how to make things, or how anything actually works. On the contrary, it would be economically beneficial to encourage delusions about the magical properties of consumer products, the better to ensure that people will continue to drink Brawndo rather than water. In other words, there is no economic incentive to produce intelligence. We can imagine that at some point in the past, legitimate institutions of higher education were dismantled (perhaps by the people Diane Ravitch discusses here), and replaced by things like Costco Law School.

I really wish someone would make a movie that’s as funny as Idiocracy without falling back on such lazy right-wing premises. On the other hand, it’s intriguing that Judge could end up making a film that mostly functions as a radical critique even though it’s based on a reactionary assumption. Idiocracy does illuminate a dangerous trend in contemporary capitalism–one that has nothing to do with the wrong people having babies, and everything to do with a system that increasingly reproduces itself by producing stupidity in the population. The movie’s only mistake is to think that our genes can save us from stupidity, when it seems far more defensible to say that “intelligence” is some combination of socially nurtured ability and statistical myth.

Translating English into English

January 4th, 2011  |  Published in Art and Literature, Sociology

So it seems there’s going to be a censored version of The Adventures of Huckleberry Finn that replaces the word “nigger” with “slave”. My initial reaction was to agree with Doug Mataconis that this is both offensive and stupid. It struck me as being of a piece with the general tendency of white Americans to deal with the existence of racism by ignoring it rather than talking about it.

And I guess I still feel that way, but after reading Kevin Drum’s take I’m more sympathetic to Alan Gribben, the Twain scholar responsible for the new censored version. Gribben says that because of the extreme visceral reactions people have to the word “nigger”, most teachers today feel they can’t get away with assigning Huck Finn to their students, even if they’d really like to. So the choice was to either consent to this bowdlerization or else let the book gradually disappear from our culture altogether. I’m still a bit torn about it–and I think that the predicament of the teachers Gribben talked to is indicative of precisely the cowardly attitudes toward race that I described above. But I’m willing to accept that censoring the book was the least-bad response to this unfortunate state of affairs.

However, what most caught my attention about Kevin Drum’s post on the controversy was this:

In fact, given the difference in the level of offensiveness of the word nigger in 2010 vs. 1884, it’s entirely possible that in 2010 the bowdlerized version more closely resembles the intended emotional impact of the book than the original version does. Twain may have meant to shock, but I don’t think he ever intended for the word to completely swamp the reader’s emotional reaction to the book. Today, though, that’s exactly what it does.

That got me thinking a more general thought I’ve often had about our relationship to old writings: it’s a shame that we neglect to re-translate older works into English merely because they were originally written in English. Languages change, and our reactions to words and formulations change. This is obvious when you read something like Chaucer, but it’s true to a more subtle degree of more recent writings. There is a pretty good chance that something written in the 19th century won’t mean the same thing to us that it meant to its contemporary readers. Thus it would make sense to re-translate Huckleberry Finn into modern language, in the same way we periodically get new translations of Homer or Dante or Thomas Mann. This is a point that applies equally well to non-fiction and social theory: in some ways, English-speaking sociologists are lucky that our canonical trio of classical theorists–Marx, Weber, and Durkheim–all wrote in another language. The most recent translation of Capital is far more readable than the older ones–and I know I could have used a modern English translation of Talcott Parsons when I was studying contemporary theory.

Now, one might respond to this by saying that writing loses much in translation, and that some things just aren’t the same unless you read them in the original un-translated form. And that’s probably true. But it would still be good to establish the “English-to-English translation” as a legitimate category, since it would give us a better way of understanding things like the new altered version of Huck Finn. You would have the original Huck and the “new English translation” of Huck existing side by side; students would read the translation in high school, but perhaps they would be introduced to the original in college. We could debate whether a new translation was good or bad without getting into fruitless arguments over whether one should ever alter a classic book. And maybe it would help us all develop a more historical and contextual understanding of language and be less susceptible to the arbitrary domination of prescriptive grammarians.

Anti-Star Trek: A Theory of Posterity

December 14th, 2010  |  Published in anti-Star Trek, Art and Literature, Political Economy

In the process of trying to pull together some thoughts on intellectual property, zero marginal-cost goods, immaterial labor, and the incipient transition to a rentier form of capitalism, I’ve been working out a thought experiment: a possible future society I call anti-Star Trek. Consider this a stab at a theory of posterity.

One of the intriguing things about the world of Star Trek, as Gene Roddenberry presented it in The Next Generation and subsequent series, is that it appears to be, in essence, a communist society. There is no money, everyone has access to whatever resources they need, and no-one is required to work. Liberated from the need to engage in wage labor for survival, people are free to get in spaceships and go flying around the galaxy for edification and adventure. Aliens who still believe in hoarding money and material acquisitions, like the Ferengi, are viewed as barbaric anachronisms.

The technical condition of possibility for this society comprises two basic components. The first is the replicator, a technology that can make instant copies of any object with no input of human labor. The second is an apparently unlimited supply of free energy, due to anti-matter reactions or dilithium crystals or whatever. It is, in sum, a society that has overcome scarcity.

Anti-Star Trek takes these same technological premises: replicators, free energy, and a post-scarcity economy. But it casts them in a different set of social relations. Anti-Star Trek is an attempt to answer the following question:

  • Given the material abundance made possible by the replicator, how would it be possible to maintain a system based on money, profit, and class power?

Economists like to say that capitalist market economies work optimally when they are used to allocate scarce goods. So how to maintain capitalism in a world where scarcity can be largely overcome? What follows are some steps toward an answer to this question.

Like industrial capitalism, the economy of anti-Star Trek rests on a specific state-enforced regime of property relations. However, the kind of property that is central to anti-Star Trek is not physical but intellectual property, as codified legally in the patent and copyright system. While contemporary defenders of intellectual property like to speak of it as though it is broadly analogous to other kinds of property, it is actually based on a quite different principle. As the (libertarian) economists Michele Boldrin and David K. Levine point out:

Intellectual property law is not about your right to control your copy of your idea – this is a right that . . . does not need a great deal of protection. What intellectual property law is really about is about your right to control my copy of your idea. This is not a right ordinarily or automatically granted to the owners of other types of property. If I produce a cup of coffee, I have the right to choose whether or not to sell it to you or drink it myself. But my property right is not an automatic right both to sell you the cup of coffee and to tell you how to drink it.

This is the quality of intellectual property law that provides an economic foundation for anti-Star Trek: the ability to tell others how to use copies of an idea that you “own”. In order to get access to a replicator, you have to buy one from a company that licenses you the right to use a replicator. (Someone can’t give you a replicator or make one with their replicator, because that would violate their license). What’s more, every time you make something with the replicator, you also need to pay a licensing fee to whoever owns the rights to that particular thing. So if the Captain Jean-Luc Picard of anti-Star Trek wanted “tea, Earl Grey, hot”, he would have to pay the company that has copyrighted the replicator pattern for hot Earl Grey tea. (Presumably some other company owns the rights to cold tea.)

This solves the problem of how to maintain for-profit capitalist enterprise, at least on the surface. Anyone who tries to supply their needs from their replicator without paying the copyright cartels would become an outlaw, like today’s online file-sharers. But if everyone is constantly being forced to pay out money in licensing fees, then they need some way of earning money, and this brings up a new problem. With replicators around, there’s no need for human labor in any kind of physical production. So what kind of jobs would exist in this economy? Here are a few possibilities.

  1. The creative class. There will be a need for people to come up with new things to replicate, or new variations on old things, which can then be copyrighted and used as the basis for future licensing revenue. But this is never going to be a very large source of jobs, because the labor required to create a pattern that can be infinitely replicated is orders of magnitude less than the labor required in a physical production process in which the same object is made over and over again. What’s more, we can see in today’s world that lots of people will create and innovate on their own, without being paid for it. The capitalists of anti-Star Trek would probably find it more economical to simply pick through the ranks of unpaid creators, find new ideas that seem promising, and then buy out the creators and turn the idea into the firm’s intellectual property.

  2. Lawyers. In a world where the economy is based on intellectual property, companies will constantly be suing each other for alleged infringements of each other’s copyrights and patents. This will provide employment for some significant fraction of the population, but again it’s hard to see this being enough to sustain an entire economy. Particularly because of a theme that will arise again in the next couple of points: just about anything can, in principle, be automated. It’s easy to imagine big intellectual property firms coming up with procedures for mass-filing lawsuits that rely on fewer and fewer human lawyers. On the other hand, perhaps an equilibrium will arise where every individual needs to keep a lawyer on retainer, because they can’t afford the cost of auto-lawyer software but they must still fight off lawsuits from firms attempting to win big damages for alleged infringement.

  3. Marketers. As time goes on, the list of possible things you can replicate will only continue to grow, but people’s money to buy licenses–and their time to enjoy the things they replicate–will not grow fast enough to keep up. The biggest threat to any given company’s profits will not be the cost of labor or raw materials–since they don’t need much or any of those–but rather the prospect that the licenses they own will lose out in popularity to those of competitors. So there will be an unending and cut-throat competition to market one company’s intellectual properties as superior to the competition’s: Coke over Pepsi, Ford over Toyota, and so on. This should keep a small army employed in advertizing and marketing. But once again, beware the spectre of automation: advances in data mining, machine learning and artificial intelligence may lessen the amount of human labor required even in these fields.

  4. Guard labor. The term “Guard Labor” is used by the economists Bowles and Jayadev to refer to:

    The efforts of the monitors, guards, and military personnel . . . directed not toward production, but toward the enforcement of claims arising from exchanges and the pursuit or prevention of unilateral transfers of property ownership.

    In other words, guard labor is the labor required in any society with great inequalities of wealth and power, in order to keep the poor and powerless from taking a share back from the rich and powerful. Since the whole point of anti-Star Trek is to maintain such inequalities even when they appear economically superfluous, there will obviously still be a great need for guard labor. And the additional burden of enforcing intellectual property restrictions will increase demand for such labor, since it requires careful monitoring of what was once considered private behavior. Once again, however, automation looms: robot police, anyone?

These, it seems to me, would be the main sources of employment in the world of anti-Star Trek. It seems implausible, however, that this would be sufficient–the society would probably be subject to a persistent trend toward under-employment. This is particularly true given that all the sectors except (arguably) the first would be subject to pressures toward labor-saving technological innovation. What’s more, there is also another way for private companies to avoid employing workers for some of these tasks: turn them into activities that people will find pleasurable, and will thus do for free on their own time. Firms like Google are already experimenting with such strategies. The computer scientist Luis von Ahn has specialized in developing “games with a purpose”: applications that present themselves to end users as enjoyable diversions, but which also perform a useful computational task. One of von Ahn’s games asked users to identify objects in photos, and the data was then fed back into a database that was used for searching images. It doesn’t take much imagination to see how this line of research could lead toward the world of Orson Scott Card’s novel Ender’s Game, in which children remotely fight an interstellar war through what they think are video games.

Thus it seems that the main problem confronting the society of anti-Star Trek is the problem of effective demand: that is, how to ensure that people are able to earn enough money to be able to pay the licensing fees on which private profit depends. Of course, this isn’t so different from the problem that confronted industrial capitalism, but it becomes more severe as human labor is increasingly squeezed out of the system, and human beings become superfluous as elements of production, even as they remain necessary as consumers.

Ultimately, even capitalist self-interest will require some redistribution of wealth downward in order to support demand. Society reaches a state in which, as the late André Gorz put it, “the distribution of means of payment must correspond to the volume of wealth socially produced and not to the volume of work performed”. This is particularly true–indeed, it is necessarily true–of a world based on intellectual property rents rather than on value based on labor-time.

But here the class of rentier-capitalists will confront a collective action problem. In principle, it would be possible to sustain the system by taxing the profits of profitable firms and redistributing the money back to consumers–possibly as a no-strings attached guaranteed income, and possibly in return for performing some kind of meaningless make-work. But even if redistribution is desirable from the standpoint of the class as a whole, any individual company or rich person will be tempted to free-ride on the payments of others, and will therefore resist efforts to impose a redistributive tax. Of course, the government could also simply print money to give to the working class, but the resulting inflation would just be an indirect form of redistribution and would also be resisted. Finally, there is the option of funding consumption through consumer indebtedness–but this merely delays the demand crisis rather than resolving it, as residents of the present know all too well.

This all sets the stage for ongoing stagnation and crisis in the world of anti-Star Trek. And then, of course, there are the masses. Would the power of ideology be strong enough to induce people to accept the state of affairs I’ve described? Or would people start to ask why the wealth of knowledge and culture was being enclosed within restrictive laws, when “another world is possible” beyond the regime of artificial scarcity?

Marx’s Theory of Alien Nation

December 10th, 2010  |  Published in Art and Literature, Social Science, Socialism

Charles Stross hits another one out of the park today. The post attempts to explain the widespread sentiment that the masses are politically powerless: “Voting doesn’t change anything — the politicians always win.” Stross advances the thesis that we have been disempowered by the rise of the corporation: first legally, when corporations were recognized as persons, and then politically, when said corporations captured the democratic process through overt and subtle forms of corruption and bribery.

Playing off the notion of corporations as “persons”, Stross portrays the corporation as a “hive organism” which does not share human priorities; corporations are “non-human entities with non-human goals”, which can “co-opt” CEOs or politicians by rewarding them financially. The punchline to the argument is that:

In short, we are living in the aftermath of an alien invasion.

I like this argument a lot, but it seems to me that it’s less an argument about the corporation as such than an argument about capitalism. Indeed, Marx spoke about capitalism in remarkably similar terms. He notes that the underlying dynamic of capitalism is M-C-M’: the use of money to produce and circulate commodities solely for the purpose of accumulating more capital. Money itself is the agent here, not any person. This abstract relationship is more fundamental than the relations between actual people–capitalists and workers–whose actions are dictated by the exigencies of capital accumulation. From Capital, chapter four:

The circulation of money as capital is, on the contrary, an end in itself, for the expansion of value takes place only within this constantly renewed movement. The circulation of capital has therefore no limits.

As the conscious representative of this movement, the possessor of money becomes a capitalist. His person, or rather his pocket, is the point from which the money starts and to which it returns. The expansion of value, which is the objective basis or main-spring of the circulation M-C-M, becomes his subjective aim, and it is only in so far as the appropriation of ever more and more wealth in the abstract becomes the sole motive of his operations, that he functions as a capitalist, that is, as capital personified and endowed with consciousness and a will.

According to Marx, the alien invasion hasn’t just co-opted its human agents but actually corrupted and colonized their minds, so that they come to see the needs of capital as their own needs. Thus the workers find themselves exploited and alienated, not fundamentally by capitalists but by the alien force, capital, which uses the workers only to reproduce itself. From chapter 23:

The labourer therefore constantly produces material, objective wealth, but in the form of capital, of an alien power that dominates and exploits him; and the capitalist as constantly produces labour-power, but in the form of a subjective source of wealth, separated from the objects in and by which it can alone be realised; in short he produces the labourer, but as a wage labourer. This incessant reproduction, this perpetuation of the labourer, is the sine quâ non of capitalist production.

This, incidentally, is why Maoists like The Matrix.

Moishe Postone makes much of this line of argument in his brilliant Time, Labor, and Social Domination. He emphasizes (p. 30) the point that:

In Marx’s analysis, social domination in capitalism does not, on its most fundamental level, consist in the domination of people by other people, but in the domination of people by abstract social structures that people themselves constitute.

Therefore,

the form of social domination that characterizes capitalism is not ultimately a function of private property, of the ownership by the capitalists of the surplus product and the means of production; rather, it is grounded in the value form of wealth itself, a form of social wealth that confronts living labor (the workers) as a structurally alien and dominant power.

Since the “aliens” are of our own making, the proper science fiction allegory isn’t an extraterrestrial invasion but a robot takeover, like the Matrix or Terminator movies. But close enough.

So in light of my last post, does this make Capital an early work of science fiction? Or does it make contemporary science fiction the leading edge of Marxism? Both, I’d like to think.

Social Science Fiction

December 8th, 2010  |  Published in Art and Literature, Social Science

Henry Farrell has a nice discussion of some recent debates about steampunk novels. He refers to Charles Stross’s complaint that much steampunk is so infatuated with gadgets and elites that it willfully turns away from the misery and exploitation that characterized real Victorian capitalism. He also approvingly notes Cosma Shalizi’s argument that “The Singularity has happened; we call it ‘the industrial revolution’”. Farrell builds on this point by noting that “one of the skeins one can trace back through modern [science fiction] is a vein of sociological rather than scientific speculation, in which events happening to individual characters serve as a means to capture arguments about what is happening to society as a whole”. The interesting thing about the 19th century, then, is that it is a period of rapid social transformation, and SF is an attempt to understand the implications of such rapid change. In a similar vein, Patrick Nielsen Hayden quotes Nietzsche: “The press, the machine, the railway, the telegraph are premises whose thousand-year conclusion no one has yet dared to draw.”

This relates to some of my own long-gestating speculations about the relationship between science fiction and social science. My argument is that both fields can be understood as projects that attempt to understand empirical facts and lived experience as something which is shaped by abstract–and not directly perceptible–structural forces. But whereas social science attempts to derive generalities about society from concrete observations, SF derives possible concrete events from the premise of certain sociological generalities. Note that this definition makes no reference to the future or the past: science fiction can be about the past, like steampunk, but it is the working out of an alternative past, which branches off from our own timeline according to clear differences in social structure and technology. If social science is concerned with constructing a model (whether quantitative or qualitative) on the basis of data, then we can think of a science-fictional world by analogy to a prediction from an existing model, such as a fitted statistical model: any particular point prediction reflects both the invariant properties of the model’s parameters and the uncertainty and random variation that makes individual cases idiosyncratic.
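To make that statistical analogy concrete, here is a minimal sketch in Python; the data, model, and numbers are all invented for illustration rather than taken from any actual study. Fitting the model is the “social science” move, while simulating a single noisy observation from the fitted model is the “science fiction” move.

    import numpy as np

    rng = np.random.default_rng(0)

    # "Recorded history": observations generated by an underlying structure plus noise
    x = np.linspace(0, 10, 50)
    y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)

    # The social-science move: recover the general from the particular (fit the model)
    slope, intercept = np.polyfit(x, y, deg=1)

    # The science-fiction move: derive a particular from the general, i.e. simulate
    # one possible concrete "future" observation (structure plus idiosyncratic noise)
    x_future = 15.0
    point_prediction = slope * x_future + intercept
    one_possible_future = point_prediction + rng.normal(scale=2.0)

    print(f"structural trend: y ~ {slope:.2f} * x + {intercept:.2f}")
    print(f"one simulated future at x = {x_future}: {one_possible_future:.1f}")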

The following are a few semi-related musings on this theme.

I. The Philosophy of Posterity

One kind of sociologically-driven science fiction is the working out of what I will call a theory of posterity. Posterity, here, is meant to imply the reverse of history. And a theory of posterity, in turn, is an inversion of the logic of a theory of history, or of the logic of social science more generally.

History is a speculative enterprise in which the goal is to construct an abstract conception of society, derived from its concrete manifestations. That is, given recorded history, the historian attempts to discern the large, invisible social forces that generated these events. It is a process of constructing a story about the past, or as Benjamin puts it:

To articulate what is past does not mean to recognize “how it really was.” It means to take control of a memory….

Or consider Benjamin’s famous image of the “angel of history”:

His face is turned towards the past. Where we see the appearance of a chain of events, he sees one single catastrophe, which unceasingly piles rubble on top of rubble and hurls it before his feet. He would like to pause for a moment so fair [verweilen: a reference to Goethe’s Faust], to awaken the dead and to piece together what has been smashed. But a storm is blowing from Paradise, it has caught itself up in his wings and is so strong that the Angel can no longer close them. The storm drives him irresistibly into the future, to which his back is turned, while the rubble-heap before him grows sky-high.

One way to read this is that the pile of rubble is the concrete accumulation  of historical events, while the storm represents the social forces–especially capitalism, in Benjamin’s reading–which drive the logic of events.

But consider what lies behind the angel of history: the future. We cannot know what, concretely, will happen in the future. But we know about the social forces–the storm–which are pushing us inexorably into that future. Herein lies the distinction between the study of history and the study of posterity: a theory of posterity is an attempt to turn the angel of history around, and to tell us what it sees.

Where the historian takes empirical data and historical events and uses them to build up a theory of social structure, a theory of posterity begins with existing social forces and structures, and derives possible concrete futures from them. The social scientist must pick through the collection of empirical details–whether in the form of archives, ethnographic narratives, or census datasets–and decide which are relevant to constructing a general theory, and which are merely accidental and contingent features of the record. Likewise, constructing an understanding of the future requires sorting through all the ideas and broad trends and institutions that exist today, in order to determine which will have important implications for later events, and which will be transient and inconsequential.

Because it must construct the particular out of the general, the study of posterity is most effectively manifested in fiction, which excels in the portrayal of concrete detail, whereas the study of the past takes the form of social science, which is built to represent abstractions. Fictional futures are always preferable to those works of “futurism” which attempt to directly predict the future, obscuring the inherent uncertainty and contingency of that future, and thereby stultifying the reader. Science fiction is to futurism what social theory is to conspiracy theory: an altogether richer, more honest, and more humble enterprise. Or to put it another way, it is always more interesting to read an account that derives the general from the particular (social theory) or the particular from the general (science fiction), rather than attempting to go from the general to the general (futurism) or the particular to the particular (conspiracism).

Science fiction can be understood as a way of writing that adopts a certain general theory of posterity, one which gives a prominent role to science and technology, and then describes specific events that would be consistent with that theory. But that generalization conceals a great diversity of different understandings. So to understand a work of speculative fiction, it helps to understand the author’s theory of posterity.

II. Charles Stross: the Sigmoid Curve and Punctuated Equilibrium

The work of Charles Stross provides an illuminating case study. Much of his work deals with the near-future, and thus is centrally concerned with extrapolating current social trends in various directions. His most acclaimed novel, Accelerando, is an account of “the singularity”: the moment when rapidly accelerating technological progress gives rise to incomprehensibly post-human intelligences.

Like most science fiction, Stross’s theory of posterity begins from the interaction of social structure and technology. This is rather too simple a formulation, however, as it tends to imply a sort of technological determinism, where technical developments are considered to be a process that goes on outside of society, and affects it as an external force. Closer to the spirit of Stross–and most good SF–is the following from Marx:

Technology discloses man’s mode of dealing with Nature, the process of production by which he sustains his life, and thereby also lays bare the mode of formation of his social relations, and of the mental conceptions that flow from them.

This formulation, to which David Harvey is quite partial, reveals that technology is not an independent “thing” but rather an intersection of multiple human relationships–the interchange with nature, the process of production (and, we might add, reproduction), and culture.

Stross’s theory of posterity places technology at the nexus of capital accumulation, consumer culture, and the state, in its function as the guarantor of contract and property rights. Thus in Accelerando, and also in books like Halting State, financial engineering, video games, hackers, intellectual property, and surveillance interact, and all of them push technology forward in particular directions. This is the mechanism by which Stross arrives at his ironic dystopia in which post-human intelligence takes the form of “sentient financial instruments” and “alien business models”.

In surveying this vision, a question arises about the way technological development is portrayed in any theory of posterity. It has been a common trope in science fiction to simply take present-day trends and extrapolate them indefinitely into the future, without regard for any major change in the direction of development. Stross himself has observed this tendency: in the first half of the 20th century, the most rapid technological advances came in the area of transportation. People projected this into the future, and consequently science fiction of that era tended to produce things like flying cars, interstellar space travel, etc.

The implicit model of progress that gave rise to these visions was one in which technology develops according to an exponential curve:

[Figure: an exponential growth curve]

The exponential model of development also underpins many popular conceptions of the technological singularity, such as that of Ray Kurzweil. As we reach the rapidly upward-sloping part of the curve, the thinking goes, technological and social change becomes so rapid as to be unpredictable and unimaginable.

But Stross observes that the exponential model probably misconstrues what technical change really looks like. In the case of transportation, he notes that the historical pattern fits a different kind of function:

We can plot this increase in travel speed on a graph — better still, plot the increase in maximum possible speed — and it looks quite pretty; it’s a classic sigmoid curve, initially rising slowly, then with the rate of change peaking between 1920 and 1950, before tapering off again after 1970. Today, the fastest vehicle ever built, NASA’s New Horizons spacecraft, en route to Pluto, is moving at approximately 21 kilometres per second — only twice as fast as an Apollo spacecraft from the late-1960s. Forty-five years to double the maximum velocity; back in the 1930s it was happening in less than a decade.

Below is the sigmoid curve:

[Figure: a sigmoid (S-shaped) growth curve]
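For concreteness, the two idealized growth models being contrasted here are easy to write down. Below is a minimal sketch in Python, with arbitrary made-up parameters chosen purely to show the shapes, not to model any actual technology.

    import numpy as np

    def exponential_progress(t, rate=0.5):
        # Kurzweil-style assumption: the rate of change compounds without limit
        return np.exp(rate * t)

    def sigmoid_progress(t, rate=0.5, ceiling=100.0, midpoint=10.0):
        # Stross-style assumption: a logistic curve that rises slowly, accelerates,
        # then levels off as it approaches a ceiling
        return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

    t = np.linspace(0, 20, 5)
    print("t:          ", t)
    print("exponential:", np.round(exponential_progress(t), 1))
    print("sigmoid:    ", np.round(sigmoid_progress(t), 1))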

It might seem as though Accelerando, at least, isn’t consistent with this model, since it looks more like a Kurzweil-style exponential singularity. But another way of looking at it is that the sigmoid curve simply plays out over a very long time scale: the middle parts of the book portray incredibly rapid changes, but by the end of the book the characters once again seem to be living in a world of fairly sedate development. This environment is investigated further in the follow-up Glasshouse, which pushes the singularity story perhaps as far as it will go–to the point where it begins to lose all contact with the present, rendering further extrapolation impossible.

What’s most interesting about the sigmoid-curve interpretation of technology, however, is what it implies about the interaction between different technological sectors over the course of history. Rather than ever-accelerating progress, the history of technology now looks to be characterized by something like what paleontologists call Punctuated Equilibrium: long periods of relative stasis, interspersed with brief spasms of rapid evolution. If history works this way, then projecting the future becomes far more difficult. The most important elements of the present mix of technologies are not necessarily the most prominent ones; it may be that some currently insignificant area will, in the near future, blow up to become the successor to the revolution in Information Technology.

In a recent speech, Stross further elaborates on this framework as it relates to present trends in technology. He goes farther than in previous work in rejecting a key premise of the singularity, which is that the exponential growth in raw computing power will continue indefinitely:

I don’t want to predict what we end up with in 2020 in terms of raw processing power; I’m chicken, and besides, I’m not a semiconductor designer. But while I’d be surprised if we didn’t get an order of magnitude more performance out of our CPUs between now and then — maybe two — and an order of magnitude lower power consumption — I don’t expect to see the performance improvements of the 1990s or early 2000s ever again. The steep part of the sigmoid growth curve is already behind us.

However, Stross notes that even as the acceleration in processor power drops, we are seeing a distinct kind of development based on ubiquitous fast wireless Internet connections and portable computing devices like the iPhone. The consequence of this is to erode the distinction between the network and the “real” world:

Welcome to a world where the internet has turned inside-out; instead of being something you visit inside a box with a coloured screen, it’s draped all over the landscape around you, invisible until you put on a pair of glasses or pick up your always-on mobile phone. A phone which is to today’s iPhone as a modern laptop is to an original Apple II; a device which always knows where you are, where your possessions are, and without which you are — literally — lost and forgetful.

This is, essentially, the world of Manfred Macx in the opening chapters of Accelerando.  It is incipient in the world of Halting State, and its further development will presumably be interrogated in that book’s sequel, Rule 34.

III. William Gibson and the Technicians of Culture

William Gibson is another writer who has considered the near future, and his picture in Pattern Recognition and Spook Country maps out a consensus future rather similar to Stross’s. In particular, the effacing of the boundary between the Internet and everyday life is ever-present in these books, right down to a particular device–the special glasses which project images onto the wearer’s environment–that plays a central role for both writers.

Yet technology for Gibson is embedded in a different social matrix. The state and its bureaucracy are less present than in Stross; indeed, Gibson’s work is redolent of 1990′s style imaginings of the globalized world, after the withering of the nation-state. Capital, meanwhile, is ever-present, but its leading edge is quite different. Rather than IP cartels or financiers or game designers, the leading force in Gibson’s world is the culture industry, and in particular advertizing and marketing.

This is in keeping with Gibson’s general affinity for, and deep intuitive understanding of, the world of consumer commodities. Indeed, his books are less about technology than they are meditations on consumer culture and its objects; the loving way in which brands and products are described reveals Gibson’s own infatuation with these commodities. In fact, his instincts are so well tuned that an object at the center of Pattern Recognition turned out to be a premonition of an actual commodity.

This all leads logically to a theory of the future in which changes in society and technology are driven by elements of the culture industry: maverick ad executives, cool-hunters, former indie-rock stars and avant-garde artists all figure in the two recent works. Gibson maintains a conception of the high culture-low culture divide, and the complex interrelation between the two poles, which is lacking in Stross. The creation and re-creation of symbols and meaning is the central form of innovation in his stories.

Insofar as Gibson’s recent writing is the working out of a social theory, its point of departure is Fredric Jameson’s theorization of postmodern capitalist culture. Jameson observed back in the 1970′s that one of the definitive characteristics of late capitalism was that “aesthetic production today has become integrated into commodity production generally”. Gibson, like Stross and other science fiction writers, portrays the effects of rapid change in the technologies of production, but in this case it is the technologies of aesthetic production rather than the assembly line, transportation, or communication.

And it does indeed seem that cultural innovation and recombination have accelerated rapidly in the past few decades. But in light of Stross, the question becomes: are we reaching the top of the sigmoid curve? It sometimes seems that we are moving into a world where Capital is more and more concerned with extracting rents from the control of “intellectual property” rather than pushing toward any kind of historically progressive technological or even cultural innovation. But I will save the working out of that particular theory of posterity for another post.

The Future of Music

May 8th, 2010  |  Published in Art and Literature, Everyday life, Political Economy

Recently The Atlantic published a piece about how “a generation of file-sharers is ruining the future of entertainment”. The piece is pretty silly, since it conflates “the future of entertainment” with “the profitability of the major entertainment corporations”, and in particular the record industry. Marc Weidenbaum has a nice explanation of how absurd that is. But even if you believe that the profitability of these companies is somehow necessary for us to have culture, the concern for their health seems to me wildly disingenuous and misplaced. Their troubles are not a function of “freeloaders” or the evils of the Internet. They are a result of greed and an unwillingness to part with an obsolete business model–an unwillingness that has been encouraged and abetted by the state and its approach to intellectual property law.

Here’s my solution for the record companies. All they need to do is offer a service that provides:

  • Unlimited downloads of a huge selection of music from both recent years and past decades…
  • In a high-quality format…
  • With absolutely no copy protection or other Digital Rights Management…
  • For no more than $5 per month.

Why do I think this might be a success? Because it already exists. The SoulSeek network is a file-sharing service that contains a huge selection–at least for the kinds of music I tend to like. And though it’s free to use, for a $5 donation you get a month of “privileges”, which essentially put you at the front of the line when downloading from other users, which makes the whole experience much faster.

I’ve given a lot of money to SoulSeek over the past couple of years–nearly $5 a month, as it turns out. And I would have happily given that money to a similar service that gave full legal access to copyrighted downloads, and passed some of that money on to the artists. But it doesn’t exist, because the record companies still believe they can force us to pay for $12 CDs and $1 iTunes song downloads. They don’t cling to that model because it’s the only one possible, but because they’re too greedy and short-sighted to try anything else.

Of course, the record companies and their apologists would immediately claim that the model I’ve described isn’t economically viable, and they could never make enough money from it to do all the good work they supposedly do to find and develop young artists. But even at $5 a month, there’s a lot of money to be made here. If unlimited downloads at a monthly rate caught on, it could come to be something like cable TV that a large percentage of households pay for as a matter of routine. I don’t think this is all that implausible: people like music almost as much as they like TV, and what I’m proposing would be an order of magnitude cheaper than cable.

According to the cable providers’ trade association, there are 62.1 million basic cable subscriptions in the United States. This number of online music subscriptions, at $5 per month, would bring in around $3.7 billion of revenue per year. In 2005, total revenue from the sale of recorded music in the U.S. was about $4.8 billion. When you consider how much cheaper digital distribution is than manufacturing and shipping physical media, the unlimited-downloads model looks pretty competitive with traditional sales.
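That arithmetic is easy to check; here is a quick back-of-the-envelope sketch in Python, using only the figures already cited above.

    # All inputs are the numbers cited in the paragraph above, rounded
    basic_cable_subscriptions = 62.1e6   # trade association's count of U.S. basic cable subscriptions
    monthly_fee = 5.0                    # proposed price of the music subscription, in dollars per month

    annual_subscription_revenue = basic_cable_subscriptions * monthly_fee * 12
    recorded_music_revenue_2005 = 4.8e9  # total U.S. recorded-music sales revenue, 2005

    print(f"subscription model:  ${annual_subscription_revenue / 1e9:.1f} billion per year")
    print(f"2005 recorded music: ${recorded_music_revenue_2005 / 1e9:.1f} billion")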

Now, maybe this model wouldn’t catch on in the way I’ve suggested. But if people continue to prefer buying their music a la carte, there’s no reason a subscription-based service couldn’t coexist with iTunes style pay-per-download. Unfortunately, there’s not a whole lot of incentive for the big copyright cartels to move toward the system I’ve sketched out here, because the Obama administration seems intent on using the repressive power of the state to force people into consuming media in the way the media conglomerates would prefer. Atrocities like the ACTA treaty are moving us toward a world of pervasive surveillance in which our cultural wealth is kept under lock and key for the benefit of a few wealthy copyright-holders.

In light of all this, the correct response to anyone who decries the moral perfidy of file-sharers is derisive laughter. The media companies have chosen to transition into a form of rentier capitalism that requires them to wage war on their own consumers. In that environment, it can hardly be surprising that the consumers fight back.

September 21, YDAU: Gaudeamus Igitur

September 21st, 2009  |  Published in Art and Literature

It’s been a couple of weeks since I finished Infinite Jest, a bit ahead of schedule. Now that the Infinite Summer is officially concluded, I feel like I ought to write out some reflections on the book, after I have a bit of distance from it but before it starts to fade from my memory.

So much has been said about this book over the years, and so much has been said again on all the blogs and forums associated with the Infinite Summer project, that it’s hard to figure out a way to add much additional value. But what strikes me now about this book is something that was really catalyzed by reading it as a part of this big, virtual book group. What I find most compelling about Infinite Jest, I think, is the way it reveals the personalities and commitments of the people who read it.

Any serious novel can do that, of course, if people invest themselves seriously in it. But IJ is particularly suited to this task, for a couple of reasons. The first is the way it attempts to convey an everyday life that is recognizably ours, even if set in a hyperbolic and vaguely plausible near future: the book is about us, now, in the culture of post-modern capitalism. And because it’s such a big baggy monster, that “us” can encompass a wide range of social worlds: privileged intellectual misfits, recovering opiate addicts, political terrorists, and so on.

But the book also tends to reveal people because it is so obviously shot through with Wallace’s own passionate feelings about the way our culture revels in self-abnegation (through substances or entertainment), as well as the insights he thinks he has about how we might live better, more meaningful, more fulfilling lives. The book isn’t a parable, of course, and there’s no unambiguous moral lesson to draw from it. Yet it’s hard to escape the sense that its author wanted it “to make us act, and to help us live”, as Durkheim once described the purpose of religion. It’s that aspect of the work, I think, that compels people to measure it against their own moral intuitions, and to measure themselves against the book.

In sum, the book is very much a representative document of late capitalism’s Age of the Memoir: confessional, therapeutic, a bit self-absorbed, and defined by a retreat from the political, or the social, and towards the personal and the introspective. But Wallace is smart enough to recognize this retreat for what it is, and by writing fiction he escapes the memoir culture’s small-minded obsession with consistency and factuality. The result is a work of art which does far more to plumb the soul of its reader than would the work of any real-life tennis prodigy, or addict, or Quebecois terrorist. (And lest we forget, Wallace himself was at least two of these things.)

A few examples.

Which is your favorite plot thread?

It says a lot, I think, which part of the Sierpinski gasket you prefer: Hal and ETA, or Don and Ennet House, or Marathe and Steeply. To judge from the forums, Hal is the character most readers identify with, which I suppose is unsurprising. Not only is he the most fully developed character in the early parts of the story, but his background (privileged) and preoccupations (intellectual, neurotic) no doubt overlap heavily with those of the book’s audience–as indeed they do with mine. But maybe that’s why I never really identified (or Identified) with Hal. He came off to me as self-involved, whiny, and above all completely unable to put his own situation in its proper context. I sort of hated him for the same reason I hate the main character in The Catcher in the Rye. In both cases, part of my revulsion is rooted, I think, in recognizing in them some unpleasant elements of my own personality.

Then there’s Don Gately. In the second half of the book, he becomes the most sympathetic character. But he is also, to me, the most intellectually challenging to the reader. Hal’s way of coping is a familiar one to the kind of educated, literary person who reads IJ: self-absorption, over-thinking, and substances. Gately, on the other hand, is Wallace’s best effort at portraying an uneducated, unintellectual person who nevertheless at least approaches being a healthy and good person. It seems at times as though he is good because he resists self-examination–avoiding unnecessary thought being, of course, one of the cliches of AA. For a reader who is like Hal, and unlike Don, there’s no real way to completely embrace Don’s way without being either disingenuous about one’s own personality and history, or else fetishizing Don as some kind of “noble savage”. Yet we’re left with the inescapable conclusion that, if there’s any way out of the psychological traps Wallace describes, Gately is more or less it.

Then there’s the real sleeper, Marathe and Steeply. I haven’t come across anyone who says this is their favorite part of the book, and honestly it isn’t mine either. On the other hand, I did consistently find it funny and interesting, more so than the ETA sections a lot of the time. Maybe it’s because I read a lot of non-fiction and social science, so I have a high tolerance for theoretical exposition disguised as dialogue. But I also think that the writing in those segments is as richly evocative and lyrical as anything in the book, even though it’s just two guys talking to each other for hours on end. I got a kick out of simultaneously trying to picture Hugh Steeply’s absurd drag, and hear Remy Marathe’s over-the-top (and utterly non-verisimilitudinous) accent.

Wardine and yrstruly

I thought these sections were really good, and everything like that. I didn’t even find them difficult to read–once I got the hang of the dialect Wallace was going for, I could hear it in my head and the sections flowed forward quite poetically and musically.

The objections to this section seem to be of two flavors. The first, and less interesting one, comes from people who expect something different from novels than I do. Rather than an interesting challenge, they find these passages to be an affront to the reader, from an author who is more interested in making his reader work than in creating an enjoyable story. Of course, one of the major themes of the book, in my view, is that life is about much more than enjoyment, and that enjoyment can often get in the way of really living life. But that’s just what makes this reaction to these sections interesting, since others obviously didn’t interpret the book the same way I did.

The second objection stems, it seems, from a kind of political correctness, from people who find the attempt to evoke some form of African-American Vernacular English to be patronizing or offensive, a kind of minstrelsy. This strikes me as a misreading. For one thing, there’s no reason to believe that Wallace was aiming for an accurate reproduction of any existing dialect. But more importantly, I suspect that the people accusing Wallace of being patronizing are really projecting their own prejudices about language, which I discussed in a previous post: namely, that the speech of poor people, or uneducated people, or black people, is in some way “wrong” relative to the speech of people like David Foster Wallace. I also think it has to do with liberal uneasiness about race. It’s telling to me that some people apparently would have preferred Wallace to simply ignore the existence of black people, rather than trying–and maybe failing!–to represent some black people as part of his broader tableau. I didn’t see the same kind of hand-wringing about Don Gately, after all, even though Wallace’s childhood was a pretty long way away from Bimmy’s.

The Incredible Randy Lenz

For my money, Randy Lenz is by far the most interesting of the minor characters. He is totally repulsive, of course. On that absolutely everyone seems to agree. But the way people react to him can be tremendously revealing: the Infinite Summer forum thread about Lenz is a captivating display of this.

One immediate divide concerns Lenz’s animal torture: some people find it unspeakably horrible, more so than just about anything else in the book, to the point that they find themselves driven away from the book by it. Others (like me) don’t see how Lenz’s acts can be worse than some of the horrible things that happen to human beings in the novel; we worry about a tendency to privilege the lives of “innocent” animals over those of less pure-seeming humans. For me, Lenz’s total and utter debasement wasn’t really driven home until the last scene he appears in, where we find him cutting off Poor Tony’s fingers in the hope of appeasing the AFR and getting another look at the Entertainment.

The other interesting thing about Lenz is the way he brings out people’s Manichaean tendencies. Most of the other characters in the book seem to provoke some level of sympathy from readers. Indeed, the way a character like Gately is written seems calculated to make you care about and root for him, despite the fact that he was directly responsible for more than one man’s death and allowed another to die because of his cowardice and addiction. But Lenz is an exception: a lot of the discussion of him takes for granted that he is some kind of pure evil, in contrast to the complex and conflicted characters that populate the rest of the book.

This, to me, is not the point of Lenz’s character at all. I see him as a necessary complement to the Gately character, a man who has to be seen as occupying the same continuum of addiction as all the other Ennet House residents. If we only had Gately, the story would be too simplistically uplifting: it doesn’t matter that your mom was a drunk who got beaten every day, or that you’re a serious Demerol addict with no real prospects–just Take It One Day At A Time and you’ll be OK! The point of Lenz, it seems to me, is that not everyone escapes from the various personal and social traumas that lead us to destroy ourselves. Not because they are weak or evil people, but because the self-destructive forces of addiction and trauma are so great. In a different way, Poor Tony is an example of this too–and I don’t think it’s an accident that he and Lenz are together in their last scene. To put it another way, it seems to me that the only way to fully sympathize with, and Identify with, Don Gately, the only way to fully appreciate the difficulty of his struggle, is to recognize that any of us could, under a certain set of circumstances, be pulled as low as Randy Lenz.

Infinite Jest isn’t the best book I’ve ever read, but it is one that will stay with me. I was going to write that I found it more emotionally affecting than other things I’ve read lately, but that isn’t quite right. What it is, is emotionally challenging. That’s what really makes it a different kind of “big book” from, say, Gravity’s Rainbow or Ulysses, which are primarily intellectual challenges. This is a book that really made it hard to maintain a pose of emotional detachment or ironic distance–which, in these times, is a real achievement.