Everyday life

Gentrification and Racial Arbitrage

June 2nd, 2014  |  Published in Everyday life, Political Economy, Politics

This post spins out something that occurred to me in the course of writing about consumerist politics and its limitations. One of the sections concerns gentrification, and the political dead end of blaming it on what Anthony Galuzzo called “the fucking hipster show”.

Artists, students, and others classified as “hipsters” are often blamed for gentrification, rather than being understood as people who are often driven into poorer and browner neighborhoods by large-scale processes rooted in capital accumulation and government policy. This creates a divisive cultural distraction from the need to organize neighborhoods across race and class lines. I go into that in more detail in the forthcoming essay. But I had an odd thought about the racist dimension of gentrification that didn’t fit in there.

Racism is a central, unavoidable component of the whole process of gentrification in places like the United States. Landlords in non-white areas perceive that if they can bring white people into a neighborhood, they will attract more people like them. At first, the newcomers may be the low-income hipster types, but they are the pioneers who make the area safe for colonization by the rich. The ultimate outcome is that the non-white residents get priced out and displaced, along with the original gentrifiers. It’s a process that’s been repeated so many times in recent decades that it barely needs explaining anymore.

But it occurred to me that the first wave of white gentrifiers are engaging in what we might call, by analogy with finance, a kind of racial arbitrage. Arbitrage is the practice of exploiting differences in prices for the same good in different markets. When such discrepancies appear, it can be possible to make risk-free money by buying in one market and immediately selling in another.
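For readers unfamiliar with the finance jargon, the mechanics of pure arbitrage can be sketched in a few lines (all prices and quantities here are invented for illustration):

```python
# Pure arbitrage: the same good quoted at different prices in two
# markets. All numbers below are invented for illustration.

def arbitrage_profit(buy_price, sell_price, quantity):
    """Risk-free profit from buying in the cheap market and
    immediately selling in the expensive one."""
    return (sell_price - buy_price) * quantity

# The good trades at 98 in market A and 100 in market B.
profit = arbitrage_profit(buy_price=98.0, sell_price=100.0, quantity=10)
print(profit)  # 20.0
```

In real markets such opportunities disappear quickly, since buying in the cheap market and selling in the expensive one pushes the two prices together; that is part of why the analogy in what follows is only loose.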

Early gentrifiers aren’t engaging in arbitrage in this strict sense; the gains that go to early home-buyers, for instance, are consequences of the unfolding of the gentrification dynamic itself and not of some market imperfection in a static comparison. But in the early stages, racism gives rise to a situation where the perception of certain neighborhoods diverges from their lived reality. A white person who notices this can exploit it to procure housing at a discount.

This is primarily because, all things being equal, white people perceive a neighborhood as having more crime the more black people it has in it. Blacks are, in fact, more likely to live in high crime areas, but white perceptions go beyond this reality (see the linked paper for a detailed study). A white person who knows this will realize that an apartment in a black neighborhood will be systematically cheaper than the same apartment in a white neighborhood. By renting in the black neighborhood, whitey gets a discount without actually facing any additional danger.

The size of this discount is magnified by a second aspect of white racism about black crime. This one relates not to how much crime there is, but to what drives crime, and in particular violent crime. Many white people believe that rather than having a rational basis, violence in black neighborhoods is driven by some kind of cultural pathology or inherent animalistic nature. We therefore come to believe that mere proximity to black people puts us in danger.

This is illustrated in the recent, excellent debate between Ta-Nehisi Coates and Jonathan Chait. (Excellent on Coates’ side, that is. Chait’s contribution consisted of digging himself into a hole, then calling in a backhoe.) Chait, like many white liberals, tends to fall back on nebulous ideas of black cultural pathology to explain why black people face higher levels of violence and poverty. The primary difference between people like Chait and his conservative counterparts is Chait’s magnanimous acknowledgment that black pathology stems from the legacy of slavery rather than inherent inferiority.

Coates demolishes this whole patronizing and misbegotten enterprise. Drawing on his own experiences growing up in Baltimore, he shows how violence and machismo can be understandable and even necessary ways of surviving in a tough environment. “If you are a young person living in an environment where violence is frequent and random, the willingness to meet any hint of violence with yet more violence is a shield.”

But white gentrifiers moving into black neighborhoods don’t face anything like this same environment of violence. For one thing, a major source of random violence in black communities is the police, who certainly don’t treat white newcomers the same way. For another, these newcomers are disconnected from the social networks, and the legal and illegal economies, on which many urban residents depend for survival, but which can also be suffused with violence. Certainly, white gentrifiers may be subject to property crime if they are perceived as rich or as easy marks. But the notion that they face the same murder rate as their black neighbors is simply preposterous. (For women, of course, there is an additional set of concerns about safety. But here, too, there can be an overestimation of the likelihood of being raped by a strange black man rather than the pleasant-seeming friend who might even claim socialist politics.)

Nevertheless, when I’ve mentioned the possibility of moving to a high-crime, predominantly black neighborhood, I’ve heard jokes—even from leftist comrades—along the lines of “heh, only if you want to get shot”. These are, presumably, people I won’t have to compete with for an apartment. Hence racist perceptions of crime’s sources and targets drive down rents further and compound the racial arbitrage.

The anti-racism of the early arrivals, then, is what helps start the whole process of revaluation and displacement. There’s an almost absurd quality to it: white supremacy is so pervasive, and its structural mechanisms so powerful, that even white anti-racist consciousness can be a mechanism for reinforcing white supremacy. It’s an important lesson that shows why anti-racism isn’t just about purifying what’s in our hearts or our heads. It’s about transforming the economic systems and property relations that continue to reproduce racist practices and ideas.

In Defense of Soviet Waiters

February 5th, 2013  |  Published in Everyday life, Political Economy, Socialism, Work

There’s been a bit of a discussion about affective labor going around. Paul Myerscough in the London Review of Books describes the elaborate code with which the Pret a Manger chain enforces an ersatz cheerfulness and dedication on the part of its employees, who are expected to be “smiling, reacting to each other, happy, engaged”. As a remark attributed to both Giraudoux and George Burns has it, the most important thing to fake is sincerity: “authenticity of being happy is important”.

Tim Noah and Josh Eidelson elaborate on this theme, and Sarah Jaffe makes the point that this has always been an extremely gendered aspect of labor (waged and otherwise). She notes that “women have been fighting for decades to make the point that they don’t do their work for the love of it; they do it because women are expected to do it.” Employers, of course, would prefer equality to be established by imposing the love of work on both genders.

Noah describes the way Pret a Manger keeps “its sales clerks in a state of enforced rapture through policies vaguely reminiscent of the old East German Stasi”. I was reminded of the Soviet model too, but in a different way. I’m just old enough to remember when people talked about the Communist world as a really-existing place rather than a vaguely-defined bogeyman. And one of the mundane tropes that always came up in foreign travelogues from behind the Iron Curtain concerned the notoriously surly service workers, in particular restaurant waiters. A 1977 newspaper headline reads “Soviet Union Takes Hard Look At Surly Waiters, Long Lines”. In a 1984 dispatch in the New York Times, John Burns reports that “faced with inadequate supplies, low salaries and endless lines of customers, many Russians in customer-service jobs lapse into an indifference bordering on contempt.”

One can find numerous explanations of this phenomenon, from the shortcomings of the planned economy to the institutional structure of the Soviet service industry to the vagaries of the Russian soul to the legacy of serfdom. But one factor was clearly that Soviet workers, unlike their American counterparts, were guaranteed jobs, wages, and access to essential needs like housing, education, and health care. The fear that enforces fake happiness among capitalist service workers—culminating in the grotesquery of Pret a Manger—was mostly inoperative in the Soviet Union. As an article in the Moscow Times explains:

During the perestroika era, the American smile was a common reference point when the topic of rude Soviet service was discussed. In an often-quoted exchange that took place on a late-1980s television talk show, one participant said, “In the United States, store employees smile, but everyone knows that the smiles are insincere.” Another answered, “Better to have insincere American smiles than our very sincere Soviet rudeness!”

With the collapse of the USSR and the penetration of Western capital into Russia, employers discovered a workforce that adapted only reluctantly to the norms of capitalist work discipline. A 1990 article in USA Today opens with a description of the travails facing the first Pizza Hut in the Soviet Union:

To open the first Pizza Hut restaurants in the Soviet Union, U.S. managers had to teach Soviet workers how to find the “you” in U.S.S.R.

“We taught them the concept of customer service,” says Rita Renth, just back from the experience. “Things that come naturally to employees here we had to teach them to do: smiling, interacting with customers, eye contact.”

In no time, however, the managers hit on what I’ve described as the third wave form of the work ethic. Rather than appealing to religious salvation or material prosperity, workers are told that they should find their drudgery intrinsically enjoyable:

The five U.S. managers – and colleagues from Pizza Huts in the United Kingdom, Belgium, Australia and other nations – spent 12 to 14 hours a day drilling the Russians on service and food preparation, Pizza Hut style.

As a way of “motivating them to be excited about what they were doing, we made (tasks) like folding boxes into a contest,” Rae says. “When they finished, they said they couldn’t believe they would ever have fun at their jobs.”

That feeling, rare in Soviet workplaces, has been noticed. “A comment made by a lot of customers was that as soon as they walked in, they sensed a feeling of warmth,” Rae says.

It’s the Pret a Manger approach to enforced cheerfulness (which had better be authentic!), combined with gamification, 1990-style. Along the same lines is this blog post from a business school professor, who recounts the experience of the first Russian McDonald’s:

After several days of training about customer service at McDonald’s, a young Soviet teenager asked the McDonald’s trainer a very serious question: “Why do we have to be so nice to the customers? After all, WE have the hamburgers, and they don’t!”

True enough. But while they may have had the hamburgers, with the collapse of Communism they no longer had steady access to the means of payment.

The brusqueness of customer service interactions has typically been interpreted as an indication of Communism’s shortcomings, their low quality understood as a mark of capitalism’s superiority. And it does indicate a contradiction of the Soviet model, which preserved the form of wage labor while removing many of the disciplinary mechanisms—the threat of unemployment, of destitution—that force workers to accept the discipline of the employer or the customers. That contradiction comes to a head in a restaurant where both employees and customers are miserable. As the old saying goes, “they pretend to pay us, and we pretend to work”.

In his recent essay, Seth Ackerman cautions that present-day socialists shouldn’t overlook the material shortcomings of the planned economies, and he notes that “the shabbiness of consumer supply was popularly felt as a betrayal of the humanistic mission of socialism itself”. But service work is a bit different from the kind of material shabbiness he discusses, since the product and the worker are inseparable. To demand what we’ve come to think of as “good service” is ultimately to demand the kind of affective—and affected—labor that we see throughout the service industry and especially in female-gendered occupations. Paul Myerscough is clearly unsettled by a system in which, “To guard against the possibility of Pret workers allowing themselves to behave even for a moment as if they were ‘just here for the money’, the company maintains a panoptical regime of surveillance and assessment.” But 30 years ago, journalists like Myerscough were the sort of people grousing about rude Moscow waiters.

In a system based on wage labor (or its approximation), the choice between company-enforced cheerfulness and authentic resentment is unavoidable. In other words, fake American smiles or sincere Soviet rudeness. The customer service interaction under capitalism can hardly avoid the collision between fearful resentment and self-deluding condescension, of the sort Tim Noah enacts in his opening: “For a good long while, I let myself think that the slender platinum blonde behind the counter at Pret A Manger was in love with me.” Perhaps it’s time to look back with a bit of nostalgia on the surly Communist waiters of yore, whose orientation toward the system was at least transparent.

I have argued many times that the essence of the social democratic project—and for the time being, the socialist project as well—is the empowerment of labor. By means of full employment, the separation of income from employment, and the organization of workers, people gain the ability to resist the demands of the boss. But the case of affective labor is another example that shows why this supposedly tepid and reformist project is ultimately radical and unstable. Take away the lash of the boss, and you are suddenly forced to confront service employees as human beings with human emotions, without their company-supplied masks of enforced good cheer. Revealing the true condition of service work can be a de-fetishizing experience, one just as jarring as finding out how your iPhone was manufactured, and quite a bit closer to home. In both cases, we are made to confront unpleasant truths about the power relations that structure all of our experiences as consumers.

De-commodification in Everyday Life

June 7th, 2011  |  Published in Everyday life, Socialism, Time, Work

In his influential treatise on the modern welfare state, The Three Worlds of Welfare Capitalism, Gøsta Esping-Andersen proposed that one of the major axes along which different national welfare regimes varied was the degree to which they de-commodified labor. The motivation for this idea is the recognition–going back to Marx–that under capitalism people’s labor-power becomes a commodity, which they must sell on the market in order to earn the means of supporting themselves.

Following Karl Polanyi, Esping-Andersen describes the de-commodification of labor as the situation in which “a service is rendered as a matter of right, and when a person can maintain a livelihood without reliance on the market” (p. 22). So long as the society remains a capitalist one, it is never possible for labor to be totally de-commodified, for in that circumstance there would be nothing to compel workers to go take a job working for someone else, and capital accumulation would grind to a halt. However, insofar as there are programs like unemployment protection, socialized medicine, and guaranteed income security in retirement–and insofar as eligibility for these programs is close to universal–we can say that labor has been partially de-commodified. On the basis of this argument, Esping-Andersen differentiates those welfare regimes that are highly decommodifying (such as the Nordic countries) from those in which workers are still much more dependent on the market (such as the United States).

In the lineage of comparative welfare state research following Esping-Andersen, de-commodification is generally discussed in the way I’ve just presented it: in terms of the state’s role in either forcing people into the labor market, or allowing them to survive outside of it. However, from the standpoint of the worker, we can think of the de-commodifying welfare state as giving people a choice about whether or not to commodify their labor, rather than forcing them to sell their labor as would be the case in the absence of any welfare-state institutions. The choice that is involved here is not merely about income. It ultimately comes down to how we want to organize our time, and how we want to structure our relations with other people.

What is ultimately at stake here is not merely the commodification of labor-power, but the commodification of all areas of social life. For based on the institutions that exist and the choices people make within them, we can imagine multiple social equilibria. Social life could be highly commodified: everyone performs labor for others in return for a wage, and also pays others to perform social functions which they don’t have time for. But we could have a much lower level of commodification where people work less, because their cost of living is much lower: they are able to satisfy many of their personal needs without spending money.

To elucidate this point, consider a simplified thought experiment. Suppose you and I live in adjacent apartments. Now consider the following ways in which we might satisfy two of our needs: food and a clean habitat.

In scenario A, I cook my own meals and clean my own bathroom, and you do the same for yourself.

In scenario B, you pay me to cook your meals, and I pay you to clean my bathroom.

In scenario C, I pay you to cook for me and clean my bathroom, and you pay me to cook your meals and clean your bathroom.

This hypothetical is a bit silly, since with only two people involved we could simply barter services rather than paying each other money. But in a more complex economy with many people paying each other for things, the medium of exchange becomes necessary, so I leave that element in place even in this simplified example.

What might make each of these three scenarios desirable?

The advantage of scenario A is that each of us has maximal control over our labor and our lives. I cook and clean when I choose, I eat just what I like, and I will do just enough cleaning to ensure that the bathroom meets my standards of cleanliness.

The advantage of scenario B is that it might be more efficient, if each of us has what economists call “comparative advantage” in one of the tasks. If I’m a better cook, but you’re better at cleaning, then each of us ends up with overall better meals and cleaner bathrooms than we would have had otherwise. The downside, however, is that each of us has now alienated our labor to some degree. I have to monitor you to make sure that you’re doing a complete job of cleaning, and you can boss me around if you dislike my food or I don’t have dinner ready on time. What’s more, the only way for this exchange to be fair to both of us is in the unlikely event that you enjoy cleaning the bathroom just as much as I like cooking. In the more likely case that both of us find cleaning much less pleasant than cooking, you get a raw deal.
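The efficiency claim in scenario B can be made concrete with a toy calculation (the hourly figures are invented; they just encode “I cook faster, you clean faster”):

```python
# Hours each of us needs per task (invented numbers: I am the better
# cook, you are the better cleaner).
MY_COOK, MY_CLEAN = 1.0, 3.0
YOUR_COOK, YOUR_CLEAN = 2.0, 1.5

# Scenario A: each person cooks and cleans for themselves.
scenario_a = (MY_COOK + MY_CLEAN) + (YOUR_COOK + YOUR_CLEAN)

# Scenario B: I cook both meals, you clean both bathrooms.
scenario_b = 2 * MY_COOK + 2 * YOUR_CLEAN

print(scenario_a, scenario_b)  # 7.5 5.0 -- specialization saves 2.5 hours
```

Specialization lowers total labor time, which is the standard comparative-advantage result; the point of the paragraph above is that this gain comes bundled with alienation and monitoring costs that the arithmetic leaves out.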

Scenario C would seem to combine the worst elements of the other two scenarios. There is no efficiency gain, since we are both performing both tasks. And our labor is maximally alienated, since we are doing all our cooking and cleaning at someone else’s command rather than for ourselves.

The point of these examples is that they represent different visions of how the economy might work. Scenario A is the one I sympathize with, and it’s one that motivates many socialists, feminists, social democrats, and advocates of shorter working hours and less consumerist ways of living. Scenario B is more like the traditional vision of 20th century liberal capitalism: by commodifying more of social life, we increase our material abundance but at the expense of living alienated lives as commodified labor.

However, I would argue that a lot of political and economic discourse in the United States is actually dominated by the third scenario, which sees commodification as a good in itself, irrespective of its efficiency or its effect on our working lives. I said above that scenario C didn’t have anything to recommend it, but this is not exactly true. For in a society where labor-power is still commodified and people are dependent on the labor market, it is essential that we constantly create new jobs for people to perform–otherwise, you end up with mass unemployment just like we’re seeing right now. Scenario C is the one that maximizes job-creation and GDP growth, even though it is by no means obvious that it is the scenario that maximizes human happiness and satisfaction.

It’s worth belaboring this point precisely because so many liberals and even leftists take the “high-commodification” equilibrium for granted. Take the example of Matt Yglesias of the Center for American Progress. I write about him often (and once got Yglesiassed in return) because in many ways I find him a more congenial thinker than a lot of more traditional “leftists” who seem trapped in nostalgia for mid-20th century industrial capitalism. But the issue of commodification gets at a core area where we see the world differently.

Yglesias often writes about the fact that industrial employment is inevitably declining for technological reasons, and hence services are bound to make up an increasing share of the employment. This motivates some of his other hobby-horses, such as his crusade against occupational licensing, which he sees as an impediment to creating these needed service jobs. Now, I have no particular attachment to occupational licensing, and on the issue of manufacturing and industrial employment, Yglesias and I are basically in agreement. Where we disagree is in seeing the best future trajectory of the economy as one in which people perform more and more services for each other, for pay. For example:

[T]his is why I’ve been saying that yoga instructors have the job of the future. Nothing in these trends suggests that the actual quantity of janitors is going to increase in the future. If anything, falling demand for office workers implies that the future can have fewer. So is the future a smallish number of wealthy office workers served by an “aristocracy of labor” of unionized janitors awash in a pool of unemployed people enjoying free health care? Presumably not. The people of the future will be richer than the people of today, and therefore will more closely resemble annoying yuppies. Nicer restaurants are more labor-intensive than cheap ones, and the further up the scale you go the more specialized skills (think sommelier) come into play. Annoying yuppies take yoga classes, or even hire personal trainers. Artisanal cheese is more labor-intensive to produce than industrial cheese. More people will hire interior designers and people will get their kitchens redone more often. There will be more personal shoppers and more policemen. People will get fancier haircuts.

It’s easy to mock the idea that the future economy will be based entirely on giving each other haircuts and yoga instruction. But my objection is not that this is implausible–I think it’s entirely plausible, and such a world could even feature a relatively egalitarian income distribution, depending on the bargaining power of labor and the intervention of the state. The real question, I think, is whether this is the only way for things to turn out–that is, is it really true that the yuppie that is richer only shows, to the less rich, the image of their own future? And if not, is it the most desirable outcome?

I don’t want to pre-judge this choice so much as just argue that it is a choice. Whether we end up in a low-commodification, low cost of living scenario A or a high-commodification, high cost of living scenario C will be the result of an interaction between the state and other institutions and individual choices within those institutions. It is thus both a political and a cultural question. Even now, not every country resolves these questions in the same way. In the Netherlands, for example, both incomes and working hours are lower than in the United States, and a good argument can be made that the well-being of the Dutch is at least as high as our own.

This is why I think that the politics of de-commodification in the 21st century will be closely linked to the politics of time.

The Internet is not a Place (any more)

March 22nd, 2011  |  Published in Everyday life, Politics

Easily the most tiresome conversation that has resulted from the Arab revolutions of 2011 is the argument about whether these uprisings are “Twitter revolutions” or “Facebook revolutions” or whatever. On the one hand, you have lots of mainstream media organizations playing up the importance of social networks as some sort of spontaneous revolution-fuel, while ignoring the long years of organization that went into, say, the Egypt uprising. And then you have people arguing that actually, Internet communication isn’t really a good basis for political organizing, or that it will become a tool of authoritarian governments, or that Twitter is trapping us all in a neo-liberal feedback loop of circulating affects.

Today I saw this silly op-ed, about how “the tweet will never replace the street”. This is an absurd straw target, of course; as NPR media strategist Andy Carvin remarks (on Twitter!), “Why is so hard to get that many revolutionaries in the mideast simply don’t separate their online lives from their offline ones?”

This really gets at what I find to be the fundamental irrelevance of these debates: they ultimately depend on a questionable metaphor. They all proceed as though “the Internet” and “the Real World” were clearly separate spaces. That underlying metaphor of the Internet as a separate social space goes back at least to William Gibson’s coining of “cyberspace”. And it does a pretty good job of portraying the way the Internet felt when I first encountered it in the 1990’s. But I think the most noteworthy thing about the period we’re in right now is that this boundary is being erased. In another ten or twenty years, the metaphor of “the Internet” as a separate space may not even make sense to us anymore.

This is a theme that Charlie Stross has written a lot about, and he lays out some of the important themes in this 2009 speech. He notes that the spread of high-speed Internet connections, along with devices like the iPhone, is effacing the line between the Internet and the Real World. Looking forward to 2030, he says:

Welcome to a world where the internet has turned inside-out; instead of being something you visit inside a box with a coloured screen, it’s draped all over the landscape around you, invisible until you put on a pair of glasses or pick up your always-on mobile phone. A phone which is to today’s iPhone as a modern laptop is to an original Apple II; a device which always knows where you are, where your possessions are, and without which you are — literally — lost and forgetful.

Now, one can be excited or terrified about this vision, or some combination of both. But what’s significant about it is that it makes absolutely no sense to ask whether the Internet is important for real world politics in this context. The Internet is the world is the Internet.

To step back into the present: obviously we don’t yet live in a world of always-on augmented reality. But things like Twitter are a step in that direction. There is something fundamentally different about Twitter–where you can post updates and communicate with people from anywhere, as something integrated into everyday life–compared to the way the Internet was when I was a kid, when “going online” meant going down into the basement and getting lost in the screen. Newsgroups and listservs and BBS systems and the like really did feel like separate “spaces”, and so the metaphor of the internet as a place made sense. That’s why the Chappelle’s Show sketch “If the Internet Was a Real Place” is funny.

The whole misbegotten debate about Internet-versus-real activism strikes me as a consequence of the inevitable generational lag in our intellectual life. The people who are now in a position to dominate the conversation are the ones who were the first to grow up with the Internet–but it was the old Internet-as-a-place. I suspect that as the generations following mine assert their own approach to these questions, they will look at these distinctions very differently.

The Future of Music

May 8th, 2010  |  Published in Art and Literature, Everyday life, Political Economy

Recently The Atlantic published a piece about how “a generation of file-sharers is ruining the future of entertainment”. The piece is pretty silly, since it conflates “the future of entertainment” with “the profitability of the major entertainment corporations”, and in particular the record industry. Marc Weidenbaum has a nice explanation of how absurd that is. But even if you believe that the profitability of these companies is somehow necessary for us to have culture, the concern for their health seems to me wildly disingenuous and misplaced. Their troubles are not a function of “freeloaders” or the evils of the Internet. They are a result of greed and an unwillingness to part with an obsolete business model–an unwillingness that has been encouraged and abetted by the state and its approach to intellectual property law.

Here’s my solution for the record companies. All they need to do is offer a service that provides:

  • Unlimited downloads of a huge selection of music from both recent years and past decades…
  • In a high-quality format…
  • With absolutely no copy protection or other Digital Rights Management…
  • For no more than $5 per month.

Why do I think this might be a success? Because it already exists. The SoulSeek network is a file-sharing service that contains a huge selection–at least for the kinds of music I tend to like. And though it’s free to use, for a $5 donation you get a month of “privileges”, which essentially put you at the front of the line when downloading from other users, making the whole experience much faster.

I’ve given a lot of money to SoulSeek over the past couple of years–nearly $5 a month, as it turns out. And I would have happily given that money to a similar service that gave full legal access to copyrighted downloads, and passed some of that money on to the artists. But it doesn’t exist, because the record companies still believe they can force us to pay for $12 CDs and $1 iTunes song downloads. They don’t cling to that model because it’s the only one possible, but because they’re too greedy and short-sighted to try anything else.

Of course, the record companies and their apologists would immediately claim that the model I’ve described isn’t economically viable, and they could never make enough money from it to do all the good work they supposedly do to find and develop young artists. But even at $5 a month, there’s a lot of money to be made here. If unlimited downloads at a monthly rate caught on, it could come to be something like cable TV that a large percentage of households pay for as a matter of routine. I don’t think this is all that implausible: people like music almost as much as they like TV, and what I’m proposing would be an order of magnitude cheaper than cable.

According to the cable providers’ trade association, there are 62.1 million basic cable subscriptions in the United States. That number of online music subscriptions, at $5 per month, would bring in around $3.7 billion of revenue per year. In 2005, total revenue from the sale of recorded music in the U.S. was about $4.8 billion. When you consider how much cheaper digital distribution is than manufacturing and shipping physical media, the unlimited-downloads model looks pretty competitive with traditional sales.
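For what it’s worth, the arithmetic checks out; a quick sketch using the figures above:

```python
# Back-of-the-envelope check of the subscription revenue claim.
SUBSCRIBERS = 62.1e6   # basic cable subscriptions in the U.S. (trade association figure)
MONTHLY_FEE = 5.00     # proposed flat monthly price

annual_revenue = SUBSCRIBERS * MONTHLY_FEE * 12
print(f"${annual_revenue / 1e9:.2f} billion per year")  # → $3.73 billion per year
```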

Now, maybe this model wouldn’t catch on in the way I’ve suggested. But if people continue to prefer buying their music a la carte, there’s no reason a subscription-based service couldn’t coexist with iTunes-style pay-per-download. Unfortunately, there’s not a whole lot of incentive for the big copyright cartels to move toward the system I’ve sketched out here, because the Obama administration seems intent on using the repressive power of the state to force people into consuming media in the way the media conglomerates would prefer. Atrocities like the ACTA treaty are moving us toward a world of pervasive surveillance in which our cultural wealth is kept under lock and key for the benefit of a few wealthy copyright-holders.

In light of all this, the correct response to anyone who decries the moral perfidy of file-sharers is derisive laughter. The media companies have chosen to transition into a form of rentier capitalism that requires them to wage war on their own consumers. In that environment, it can hardly be surprising that the consumers fight back.

In Defense of Anonymity

March 1st, 2010  |  Published in Everyday life

Photo by Tony Pierce.

F. Scott Fitzgerald’s odd declaration that “there are no second acts in American lives” is, as has been said, among the most inaccurate statements ever made about the country. America is defined by its second lives, and almost nothing–not terrorism, not treason, not even urinating on an underage girl on video–can prevent the eternal recurrence of our public figures.

So I was not surprised by the return of Jaron Lanier. I first encountered this asshat in Thomas Frank’s One Market Under God, where he appeared as an example of the bankrupt culture of the dot-com bubble: a guy who was celebrated (by bell hooks, even!) for his dreadlocks and his general counter-cultural persona, but whose contribution to public debate consisted of defending the neoliberal conventional wisdom and sticking up for the Microsoft monopoly.

Now Lanier is back to inform us that the Internet is ruining everything. His schtick is a mashup of three old chestnuts: that post-modern, sample-based forms of creation are inferior to “real” creativity, that rampant piracy is making it impossible for creative workers to make a decent living, and that online anonymity trashes civil discourse.

The first argument is so stupid–and so redolent of 1950s parents complaining about rock and roll, or baby boomers trashing rap–that I think it’s basically self-refuting for anyone under 40. The second strikes me as a potential but not a real problem, since the Internet has notably failed to make a significant dent in the volume of artistic production. To the extent that online “free culture” destabilizes the careers of people I actually respect–Charles Stross, for example–I’m sympathetic, but still not inclined to join the hand-loom weaver defense committee. We need to find other ways of supporting artistic work, rather than clinging to a repressive and inefficient regime of artificial scarcity.

The denunciation of anonymity is the most compelling, however, and you hear it repeated all the time, by people of all ages and political temperaments. It is undoubtedly the case that anonymous trolls have a remarkable ability to disrupt rational discourse; anyone who has ever had their forum invaded by birthers or accidentally looked at the comments on a YouTube video can attest to this.

However, I do not think we should be so quick to give up the power and freedom that come from anonymity. Opponents of anonymity will emphasize that when your statements are attached to your real name, you are forced to take responsibility for them. This is true, but it has some ominous implications. What makes the question of identity on the Internet so fraught is that when you reveal your identity you reveal it not just to your immediate interlocutors, but to friends, family, potential employers, the state, and anyone who cares to google your name. This point is often lost, I think, on critics of anonymity who are already public figures (often with secure academic or journalistic sinecures of one kind or another) and therefore have little to fear (and much to gain) from associating their comments with their identity.

Moreover, a lack of anonymity is no guarantee of civility. For a case study, see this thread, in which the pundit Jim Sleeper behaves like a colossal ass while denouncing the anonymity of some acerbic but basically civil critics in one of his comment threads.

Critics of anonymity can, of course, propose the alternative of privacy. That is, even if your interventions in public discourse are associated with your name, employers or governments or schools or credit card companies can be prohibited from using those words against you. Yet I doubt that this is really the right place to make our stand. There is, first of all, the practical difficulty of preventing state and private entities–which now gather almost unfathomable amounts of data on the population–from making use of information in ways that benefit them. But even more than this, what does the concept of privacy even mean today? We still deploy the concept of a “private sphere” as though it denotes some clearly defined part of our lives that is distinct from the “public” part that is accessible to everyone. But this division depends for its meaning on a particular social structure, in which we have one set of “public” relations–in politics, and the labor market–and another set of “private” social relationships centered on the family. I’ll quote what I said in an exchange with Rob Horning, who invoked Herbert Marcuse as a critic avant la lettre of social networking, consumerism, and the concomitant attenuation of privacy:

What does it mean for the individual to be “thrown back on himself alone”, to be able to “think and question and find”? The full explication is too long to draw out here, but I’ve concluded that the sort of non-capitalist individuality that Marcuse is defending only makes sense within a society that still makes a strong distinction between “public” and “private” spheres. It’s important to note that the private, here, is based on the bourgeois nuclear family, not the individual—Marcuse’s debt to Freud and left-Freudians like Wilhelm Reich is really important. This kind of public-private distinction is itself rooted in a contrast between mid-20th century mass consumer capitalism and an earlier form of capitalism in which the public-private distinction still had more salience (at least for the privileged classes). Such a distinction is historically specific to Marcuse’s time—for people my age (and I think I’m about the same age as you, Rob), our frame of reference is just a previous stage of mass culture. These days that’s basically true for anyone under 65. Thus my suspicion that rejecting social networking, the Internet, etc. is really just a nostalgia for an earlier kind of consumer culture. So the recourse to individuality as an alternative ends up sounding like a kind of absurd narcissism when you counsel us to “sit quietly in a room, as Pascal prescribed”. In a society that is already totally atomized, rejecting mass culture means being totally anti-social, which is not what Marcuse was recommending. I’m not sure anymore that defending “privacy” as such is a useful place to make our stand. Privacy from whom, and for whom? Again, without the bourgeois notion of the “private sphere” it seems arbitrary to make this distinction. Which isn’t to say that we shouldn’t be concerned about having every aspect of the self be an open book for Capital.
But I wonder if in the 21st century surveillance society, the relevant issue isn’t the right to *anonymity* rather than the right to privacy.

What does it mean to invoke anonymity as an alternative to privacy? One advantage of this move is that it allows us to leave behind the premise that each of us has a single, authentic self. The whole critique of anonymity rests upon the assumption that we can choose one of two ways of presenting ourselves: as our one “true” self, associated with our legal name, consistent and continuing over time, or as an anonymous and irresponsible avatar of the moment, who can drive-by troll a comment thread and then disappear into the night. Neither of these poles represents the way most people actually live, however. Even those of us who never post anonymously online have multiple selves: work self, school self, family self, bar self, fantasy baseball self, or whatever.

The Internet-age culture of pseudonymous handles only codifies this, and in some ways actually makes it more accountable. In the bizarre Jim Sleeper exchange I linked above, a couple of Sleeper’s antagonists make an important point: just because their comment-thread handles aren’t their real names doesn’t mean they don’t care about the reputations of those handles. Anonymity isn’t a way of avoiding accountability so much as a way of dividing it, acknowledging the reality that we are legion, we contain multitudes.

To the extent that truly one-off, unaccountable, drive-by anonymity is a problem, I think the solution is not to demand that people “take off their masks” but to devise new ways of managing anonymous and pseudonymous communities. That might mean a benevolent monarchy under a blogger like Ta-Nehisi Coates, who presides over the finest commenter community on the political internet. Or it might mean technical innovations like Slashdot Moderation, a breakthrough that I wish were much more widely applied on non-geek sites. But anonymity, I think, is here to stay. And that’s as it should be.
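The core idea behind Slashdot-style moderation is simple enough to sketch: the community assigns each comment a score, and each reader chooses a display threshold, so drive-by trolling sinks out of view without anyone being unmasked. A minimal illustration (the names and score range here are illustrative, loosely modeled on Slashdot’s -1 to +5 scale):

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str   # a pseudonymous handle, not a legal name
    text: str
    score: int    # community-assigned score, e.g. -1 (troll) to +5 (insightful)

def visible(comments, threshold=1):
    """Return only the comments at or above the reader's chosen threshold."""
    return [c for c in comments if c.score >= threshold]

thread = [
    Comment("troll42", "drive-by insult", -1),
    Comment("regular_handle", "substantive reply", 4),
]
print([c.author for c in visible(thread)])  # → ['regular_handle']
```

The troll’s comment is never deleted and the troll is never identified; readers simply tune it out, which is the whole point: accountability attaches to the handle’s reputation, not the person’s name.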