Robot Redux

August 18th, 2015  |  Published in anti-Star Trek, Political Economy, Politics, Time, Work

It never fails that when I get around to writing something, I’m immediately inundated by directly related news, making me think that I should have just waited a few days. The moment I commit bits to web servers about the robot future, I see the following things.

First, the blockbuster New York Times story about Amazon and its corporate culture. The brutality of life among the company’s low-wage warehouse employees was already well covered, but the experience of the white collar Amazonian was less well known. The office staff, it seems, experiences a more psychological form of brutality. I couldn’t have asked for a better demonstration of my point that “the truly dystopian prospect is that the worker herself is treated as if she were a machine, rather than being replaced by one”. To wit:

Company veterans often say the genius of Amazon is the way it drives them to drive themselves. “If you’re a good Amazonian, you become an Amabot,” said one employee, using a term that means you have become at one with the system.

On to number two! Lydia DePillis of the Washington Post reacts to efforts to raise the minimum wage in exactly the way I mentioned in my post: by raising the threat of automation. She notes various advances in technology, while also observing that in recent times “the industry as a whole has largely been resistant to cuts in labor . . . the average number of employees at fast-food restaurants declined by fewer than two people over the past decade”. But, she warns, that could all change if the minimum wage is raised to $15.

Liberal economist (and one-time adviser to the Vice President) Jared Bernstein responds here. He makes, in a slightly different way, the same point I did: “one implication of this argument is that we should make sure to keep wages low enough so employers won’t want to bother swapping out workers for machines . . . a great way to whack productivity growth.” (Not to mention, a great way to make life miserable for the workers in question.) He then goes on to argue that higher wages won’t really lead to decreased employment anyway, which sort of undercuts the point. But oh well.

Finally, we have the Economist weighing in. This little squib on “Automation angst” manages to combine all the bourgeois arguments into one, in a single paragraph:

[Economist David] Autor argues that many jobs still require a mixture of skills, flexibility and judgment; they draw upon “tacit” knowledge that is a very long way from being codified or performed by robots. Moreover, automation is likely to be circumscribed, he argues, as politicians fret about wider social consequences. Most important of all, even if they do destroy as many jobs as pessimists imagine, many other as yet unimagined ones that cannot be done by robots are likely to be created.

So, to summarize. The robots won’t take your job, because they can’t. Or, actually, the robots can take your job but they won’t, because we will make a political decision to disallow it. Or no, never mind, the robots will take your job, but it’s fine because we will create lots of other new jobs for you.

This summarizes the popular approach to this problem well, from a variety of vantage points that all miss the main point. Namely, that if it is possible to reduce the need for human labor, the question becomes: who benefits from that? The owners of the robots, or the rest of the working masses?

Egyptian Lingerie and the Robot Future

August 6th, 2015  |  Published in anti-Star Trek, Feminism, Political Economy, Politics, Work  |  1 Comment

The current issue of the New Yorker has a story about the odd phenomenon of Chinese lingerie merchants in Egypt. These immigrant entrepreneurs are apparently ubiquitous throughout the poor, conservative districts of upper Egypt, where they dispense sexy garments to the region’s pious Muslim women. The cultural and geopolitical details of the story are interesting for a number of reasons, but I was struck in particular by a resonance with some debates that have recently flared up again about labor and automation, for reasons I’ll get back to below.

“Robots will take all our jobs” is a hardy perennial of popular political economy. Typical of the latest crop is Derek Thompson of the Atlantic, who wrote an article (in which he quotes me), speculating about a “World Without Work” in the wake of mass adoption of robotization and computerization. Paul Mason gives a more leftist and political rendition of similar themes.

As I note in my recent Jacobin editorial, this kind of thing is not new, and is in fact an anxiety that recurs throughout the history of capitalism. Two decades ago, we had the likes of Jeremy Rifkin and Stanley Aronowitz musing about the “end of work” and the “jobless future”.

And these repeating waves of robo-futurism call into existence the same repeated insistence that robots are not, in fact, taking all the jobs. Doug Henwood was on this beat twenty years ago and remains on it today. Matt Yglesias, likewise, calls fear of automation a “myth”.

One of the specific things that people like Henwood and Yglesias always cite is the productivity statistics. If we were seeing a wave of unprecedented automation, then we should be seeing rapid rises in measured labor productivity—that is, the amount of output that can be produced per hour of human labor. Instead, however, what we’ve seen is historically low productivity growth, compared to what happened in the middle and late 20th Century.

All of which leads commentators like Yglesias and Tyler Cowen to fret that the robots aren’t coming fast enough. Typical of most writers on this subject, Yglesias just worries vaguely that increases in productivity won’t happen for some unspecified reason.

I’ve argued a number of times for an explanation that connects the question of automation and productivity growth directly to wages and the general condition of labor. The basic idea is very simple. From the perspective of the boss, replacing a worker with a machine will be more appealing to the degree that the machine is:

  • Cheaper than the human worker
  • More convenient and easier to control than the human worker
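To make the boss's calculus explicit, here is a minimal sketch. It is a deliberate caricature of the two conditions above, and the function name and all numbers are hypothetical illustrations, not data:

```python
def boss_prefers_machine(machine_cost, worker_wage, control_premium=0.0):
    """Caricature of the boss's calculus: replace the worker when the
    machine's effective hourly cost, discounted by whatever value the
    boss places on its greater controllability, undercuts the wage."""
    return machine_cost - control_premium < worker_wage

# At a low wage the worker keeps the job; raise the wage floor and
# the same machine suddenly pencils out.
print(boss_prefers_machine(machine_cost=12.0, worker_wage=9.0))   # False
print(boss_prefers_machine(machine_cost=12.0, worker_wage=15.0))  # True
```

The second call is the scenario the automation-threat warnings invoke: nothing about the machine has changed, only the wage has.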

This implies that if workers win higher wages and more control over their working conditions, their jobs are more likely to be automated. Indeed, arguments like this frequently crop up among critics of things like the Fight for 15 campaign, which demands higher wages for fast food workers and other low wage employees. Prototypes for automatic burger-making machines are cited in order to warn workers that their jobs are at risk of being automated away.

I regard such warnings not as arguments against higher wages, but arguments for them. Workers, in the course of fighting for their interests, drive the dialectic that forces capitalists to find less labor-intensive ways of producing. The next political task, then, is to make sure that the benefits of such innovation accrue to the masses, and not to a small class of robot owners.

What I fear most is not that all of our labor will be replaced with machines. Rather, like Matt Yglesias, I worry that it won’t—but for a slightly different reason. Again, bosses prefer workers to machines when they are cheaper and easier to control. Hence the truly dystopian prospect is that the worker herself is treated as if she were a machine, rather than being replaced by one.

Which brings us back, finally, to the Chinese lingerie merchants. The article’s author, Peter Hessler, speaks to one such merchant, and asks him to comment on the biggest problem facing Egypt. To his surprise, his subject, Lin Xianfei, has a quick answer: gender inequality.

But the point turns out not to be that Lin is some sort of secret passionate feminist. Rather, his perspective turns on the exigencies of capital accumulation. For it turns out that while one kind of patriarchy is an impediment to business, another kind can be quite valuable to the shrewd businessman.

The problem, from Lin’s perspective, is that Egyptian women in his region don’t work in wage labor at all, or if they do they only do so for short periods of time, before marrying and retreating into the home. Even worse, local norms about proper female behavior preclude taking women out of their homes to live on site in massive dormitories, as might be done in China. Thus it becomes unfeasible to run factories on 24-hour production cycles.

Hiring men, meanwhile, is out of the question—another man, Xu Xin, tells Hessler that Egyptian men are too lazy and undisciplined for manufacturing work. Hessler goes on to note that “at the start of the economic boom in China, bosses hired young women because they could be paid less and controlled more easily than men”.

He proceeds to comment that female Chinese workers turned out to be “more motivated”, as though he is identifying something distinct from their weaker power position relative to men. But it is really the same thing. “More motivated”, here, refers to the motivation to work hard for the boss, for someone else’s profits and someone else’s riches. To behave, in other words, like obedient machines. The Chinese capitalist objects to the patriarchal structure of rural Egyptian society not because it is patriarchy, then, but because it is a form of patriarchy that is inconvenient to capital accumulation.

And sure enough, faced with recalcitrant humans, the textile magnates of Egypt turn to the same solution that the Chinese electronics firm Foxconn adopted in the wake of worker uprisings there. Wang Weiqiang echoes the other industrialists’ complaints about Egyptian labor: the men are lazy, the women “will work only during the daytime”. As a result, “he intends to introduce greater mechanization in hopes of maximizing the short workday”.

Greater mechanization and the maximization of a short work day might seem tragic to the capitalist, but they summarize the short-term goal of the post-work socialist left. Ornery, demanding workers drive the technological developments that further this goal. And the socialist-feminist rendition of this project insists that we can prevent workers from being treated as machines not by shielding them with patriarchal and paternalistic morals, but rather by insisting that men and women alike can recognize their paid and unpaid labor in order to better refuse it.

Time Bubbles and Tech Bubbles

March 18th, 2015  |  Published in anti-Star Trek, Shameless self-promotion  |  1 Comment

The Time Bubble

The new issue of Jacobin is out. It’s about technology, a longstanding preoccupation of mine, and I have the lead editorial. Check it out, along with all the other great stuff in the issue.

I also wrote something for the newest issue of the New Inquiry, which is themed around “futures”. My essay is here. In some ways it functions as a companion piece to my editorial, although it’s generally loopier and weirder. It was retitled from my editor’s original suggestion, “The Time Bubble”, following the Fantastic Four storyline I reference in the text.

The above is an image from that storyline, showing the FF penetrating said bubble on their “time sled”. Which is named Rosebud II. I loved this series of comics when I first read it as a 10-year-old, and I still have fond feelings about it. Walt Simonson was great on that run, which he both wrote and drew. He has a wonderfully angular and abstract art style, and he’s a witty writer with a good science fiction mind.

So I’m glad I got to build an essay about Marxist political economy around this story. Not that I’m the first person on the Internet to build an elaborate and vaguely ridiculous theory around these comics. For a far more ambitious and absurd attempt, you have to check out this site. The author argues that the 1961-1989 run of the Fantastic Four actually constitutes the “Great American Novel”, an unmatched examination and synthesis of all the big questions that confronted American society during the cold war.

The site’s coverage of the time bubble story can be found here. The author makes a bunch of metafictional arguments about the relationship between the stories and the upheaval in Marvel’s editorial direction at the time, which was of course totally invisible to me when I was 10. The time bubble, he argues, represents the end of continuity and permanent change in the Marvel universe. It is about “all powerful beings”—i.e., editors—“who prevent the world from moving into the future” by dictating that writers cannot make permanent changes to the characters and worlds that they are writing.

Later on, there’s another funny series of comics riffing on Marvel’s internal bureaucracy, with a dimension of infinite faceless desk jockeys standing in for a directionless editorial team. It’s all hilarious and wonderful. But really, just go read the comics.

The Tragedy of the Commons in the Rentier Mind

February 12th, 2015  |  Published in anti-Star Trek  |  2 Comments

Complementing my last post, here’s a story about the twisted ideology that now surrounds intellectual property, where IP is considered not just as a utilitarian necessity, but as some kind of inherent natural right. In the most absurd form, it is seen as a moral responsibility for creators to zealously defend any IP they can get their hands on, and to maximize whatever amount of money they can squeeze out of it.

This article is about our trendy hot sauce of the moment, Sriracha. Specifically, it is about the fact that while the hot sauce in question is strongly associated with a particular product made by Huy Fong foods, the Sriracha name itself is not trademarked. As a result, everyone from your local twee sauce artisan to Heinz and Tabasco is now jumping in with their own Sriracha.

None of this seems to much bother Huy Fong’s founder, David Tran. But boy does it bother all the people that look at this scenario and see a bunch of juicy lawsuits!

The author of the LA Times article calls it a “glaring omission” not to trademark the word.

“In my mind, it’s a major misstep,” says the president of a food marketing consultancy.

Even his competitors are baffled. “We spend enormous time protecting the word ‘Tabasco’ so that we don’t have exactly this problem,” says the CEO of a rival hot sauce company that’s now going into the Sriracha market. “Why Mr. Tran did not do that, I don’t know.”

An IP lawyer laments: “The ship has probably sailed on this, which is unfortunate because they’ve clearly added something to American cuisine that wasn’t there before.”

That David Tran has added something to American cuisine is hard to dispute. But Tran also has a successful, growing business that has most likely made him very rich. One which he has said, on numerous occasions, he deliberately does not scale up as much as he could, in order to maintain the quality of his product and control the sources of his peppers.

So for whom is it so “unfortunate” that he doesn’t spend his life in constant litigation against anyone who dares make a Thai-style hot sauce and name it after a city in Thailand? Tran himself gives the answer. “We have lawyers come and say ‘I can represent you and sue’ and I say ‘No. Let them do it.'”

Intellectual Property and Pseudo-Innovation

February 10th, 2015  |  Published in anti-Star Trek, Political Economy  |  1 Comment

The most common justification for intellectual property protection is that it provides an incentive for future creation or innovation. There are many cases where this rationale is highly implausible, as with copyrights that extend long after the death of the original author. But even where IP does spur innovation, the question arises: innovation of what kind?

I’ve written before about things like patent and copyright trolling, where the IP regime incentivizes innovations that have no value at all, because they amount to figuring out ways to leverage the law in order to make money without doing any work or producing anything. But there’s another category of what might be called “pseudo-innovation.” This involves genuine creativity and cleverness, and the end result is something with real social utility. But the creativity and cleverness involved pertains only to circumventing intellectual property restrictions, without which it would be possible to produce a better output in a simpler way. A couple of examples of this have recently come to mind.

The first is the movie Selma, Ava DuVernay’s dramatization of the 1965 Selma to Montgomery voting rights marches led by Martin Luther King, Jr. Like most dramatizations of historical events, the movie takes liberties with the historical record in order to compress events into a coherent and compelling narrative. But one of these liberties is particularly unusual: in scenes recreating actual King speeches, none of the words we hear from actor David Oyelowo’s mouth are King’s; rather they are broad paraphrases of the original words.

As it turns out, this was not a decision made for any artistic reason, but for a legal one: King’s speeches are still the property of his descendants, who make large amounts of money by zealously guarding their copyrights. DuVernay was apparently barred from using the speeches because the film rights to King had already been licensed to Steven Spielberg; meanwhile, the King family has had no problem lending his memory out to commercials for luxury cars and phone companies. DuVernay does an elegant job of giving the content and the feel of King’s oratory without using his actual words, and one could perhaps even argue that some unique value arises from this technique. But for the most part it’s pseudo-innovation, a second best solution mandated by copyright.

Another example comes from a very different field, computer hardware manufacturing. Here we turn to the early 1980’s and the development of the “PC clone.” Today, the personal computer is a generic technology—the machines that run Windows or Linux or other operating systems can be bought from many manufacturers or even, like the machine I’m using to write this post, assembled by the end user from individually sourced components. But in 1981, the PC was the IBM PC, and if you wanted to run PC software you needed to buy a machine from IBM.

Soon after the PC was introduced, rival companies began trying to produce cheaper knockoffs of the IBM product–the efforts of one leader, Compaq, are dramatized in the AMC series “Halt and Catch Fire”. Building the machines themselves was trivial, because the necessary hardware was all publicly available and didn’t require any proprietary IBM technology. But problems arose in the attempt to make them truly “IBM-compatible”—that is, able to run all the same software that you could run on an IBM. This required copying the BIOS (Basic Input/Output System), a bit of software built into the PC that programs use to interface with the hardware.

That BIOS was proprietary to IBM. So in order to copy it, Compaq was forced into a bizarre development system described by Compaq founder Rod Canion as follows:

What our lawyers told us was that, not only can you not use it [the copyrighted code], anybody that’s even looked at it—glanced at it—could taint the whole project. (…) We had two software people. One guy read the code and generated the functional specifications. So, it was like, reading hieroglyphics. Figuring out what it does, then writing the specification for what it does. Then, once he’s got that specification completed, he sort of hands it through a doorway or a window to another person who’s never seen IBM’s code, and he takes that spec and starts from scratch and writes our own code to be able to do the exact same function.

Through this convoluted process, Compaq managed to make a knockoff BIOS within 9 months. Just as Ava DuVernay came up with paraphrases of King, they had essentially paraphrased the IBM BIOS. And the result was something genuinely useful: a cheaper version of the IBM PC, which expanded access to computing. But the truly inventive and interesting things Compaq came up with—the things that make the story worth fictionalizing on TV—are pure pseudo-innovation.

Looked at this way, the world of IP pseudo-innovation looks kind of like high finance. In both cases, you have people making money and even having fun figuring out the best ways to game and counter-game the system, but none of the complicated trading algorithms or software development strategies add anything to social wealth.

Critique of Any Reason

February 6th, 2015  |  Published in Uncategorized

Further to my last post. Some years ago I went to a gallery in Queens, and as a byproduct ended up on a mailing list that periodically advertises at me with various art objects. As it turns out, you can literally shove the enlightenment up your ass!

Anal Scroll is a limited edition series of 5 custom crafted butt plugs made of pH neutral cast silicone, dimensions: 5.7 x 1.2 x 2.3 inches (14.5 x 3 x 5.8cm). Inside each plug is a scrollable text printed on 7/8 x 144 inch (2.5 cm x 360 cm) fabric ribbon of, “Of Space” from Critique of Pure Reason by the German philosopher Immanuel Kant. Anal Scroll was infamously performed in the eponymously titled work by the artist’s alter-ego Renny Kodgers at Newcastle University, Australia in 2014 as part of The Grotto Project presents: Art and the Expanded Cover Version — curated by Sean Lowry PhD. Accompanying each Anal Scroll is: Instructions for Use as well as Disclaimer: “Entry At Your Own Risk”. Price $300 with one artist proof $400 for Plato’s Cave performance; The Groker, Exhibition: January 24, 2015 – February 21, 2015.

Beginning to See the Light

February 6th, 2015  |  Published in Socialism

So I found myself (h/t Gavin Mueller) perusing Cyril Smith on Hegel, Marx, and the enlightenment, and by way of that Marx’s comments on religion. (For contemporary relevance, see here and here.) Smith quotes an 1842 letter (Marx was 24 at this point; what have I been doing with my life?):

I requested further that religion should be criticised in the framework of criticism of political conditions rather than that political conditions should be criticised in the framework of religion, since this is more in accord with the nature of a newspaper and the educational level of the reading public; for religion itself is without content, it owes its being not to heaven but to the earth, and with the abolition of distorted reality, of which it is the theory, it will collapse of itself. Finally, I desired that, if there is to be talk of philosophy, there should be less trifling with the label ‘atheism’ (which reminds one of children, assuring everyone who is ready to listen, that they are not afraid of the bogy man), and that instead the content of philosophy should be brought to the people.

This applies, of course, to contemporary anti-religious scolds of the Sam Harris/Bill Maher/Richard Dawkins variety. But the term “religion” could, in many contexts, be replaced with “science” or “reason” today. That is, the authority of science or reason is used as a cudgel against those who might have good—though perhaps misguided—bases for questioning whether the scientific process is distorted by the imperatives of capital accumulation. And so too against those who point out that the right to argue from disinterested reason is not one that is evenly or universally acknowledged. (Repeatedly these days I find myself thinking of this as a model for engaging wrong ideas in the spirit of Lenin’s “patiently explain” rather than a spirit of arrogant derision.)

And Smith points out that reason, and the enlightenment, were for Hegel and many others fundamentally religious concepts:

The atheists, and especially the Enlightenment materialists, who easily settled this entire discussion with the word ‘superstition’, left no more space for subjectivity than their opponents: we are just matter in motion, governed by the laws of Nature, they said. Spinoza had no trouble identifying the laws of nature with God’s will, and Hegel shows that Enlightenment and superstition in the end agree with each other. ‘Marxism’, coming up with ‘material laws of history’, locked the gates still more securely.

Needless to say I endorse the scare-quoting of “Marxism” in this context. The criticism of ideology generally proceeds more constructively by analyzing the conditions of that ideology’s possibility, rather than simply confronting it with counter-ideology. And my favored reading of Marx, from “On the Jewish Question” on outwards, is that the enlightenment ideal of disinterested reason is best posited as the objective of communists, an ideal that cannot be realized in capitalism, rather than an existing regime to be defended against the forces of irrationalism.

Wisconsin Ideas

February 5th, 2015  |  Published in Politics

A few years back, in pursuit of the lately relevant notion that all politics are, in some sense, identity politics, I wrote a bit about the role of regional cultures as a basis for left identities. In particular, I talked about the labor protests in Wisconsin and their relationship to the “Wisconsin idea”:

The historian Christopher Phelps argues that the protests drew strength and legitimacy from a particular set of shared norms unique to Wisconsin: the “Wisconsin idea,” a left-populist notion that both government and economy should be accountable to the common man. The Idea goes back to the early twentieth century politician Robert La Follette; it is taught in Wisconsin schools and is often invoked to describe the mission of the state University system. There is nothing inherently exclusionary or chauvinist about the Wisconsin Idea; its purpose is to provide a big tent in which all Wisconsinites can define themselves as part of an imagined community with shared progressive values.

The University of Wisconsin dedicates a whole section of its website to expounding upon the Idea. Scott Walker apparently understands the significance of all this, since he recently tried to delete references to the Wisconsin Idea from the university’s mission statement, replacing it with some blather about “meeting the state’s workforce needs.”

This move met with substantial backlash, although it remains to be seen if the symbolism of the Wisconsin Idea can be effectively mobilized to push back Walker’s substantive attempt to further dismantle the state’s public sector.

Beyond the Welfare State

December 10th, 2014  |  Published in Political Economy, Politics, Socialism, Work  |  6 Comments

Jacobin has published Seth Ackerman’s translation of an interesting interview with French sociologist Daniel Zamora, discussing his recent book about Michel Foucault’s affinities with neoliberalism. Zamora rightly points out that the “image of Foucault as being in total opposition to neoliberalism at the end of his life” is a very strained reading of a thinker whose relationship to the crisis of the 1970’s welfare state is at the very least much more ambiguous than that.

At the same time, Zamora’s argument demonstrates the limitations imposed by the displacement of “capitalism” by “neoliberalism” as a central category of left analysis. For his tacit premise seems to be that, if it can be shown that Foucault showed an “indulgence” toward neoliberalism, we must therefore put down his influence as a reactionary one. But what Foucault’s curious intersection with the project of the neoliberal right actually exemplifies, I would argue, is an ambiguity at the heart of the crisis of the 1970’s which gave rise to the neoliberal project. That he can be picked up by the right as easily as the left says much about the environment that produced him. Meanwhile, Zamora’s own reaction says something important about a distinction within the social democratic left that is worth spending some time on, which I’ll return to below.

Zamora makes much of the neoliberal move away from the attempt to reduce inequality, in the direction of targeted efforts to alleviate poverty and provide a minimum standard of living. (In a juicy bit bound to delight those of us immersed in the wonky details of empirical measures of inequality, he even quotes one of Foucault’s right-wing contemporaries positing that “the distinction between absolute poverty and relative poverty is in fact the distinction between capitalism and socialism”.) But in doing so, he elides the force of the Foucauldian critique of the welfare state. It is true that the move away from universal social provision and toward targeted aid is a hallmark of social policy in the era of welfare state retrenchment. But this is not the main point of Foucault’s argument, even by Zamora’s own telling.

Foucault, he argues, “was highly attracted to economic liberalism” because “he saw in it the possibility of a form of governmentality that was much less normative and authoritarian than the socialist and communist left.” It is possible to see this as nothing more than either reaction or naïveté, as Zamora seems to when he warns of Foucault’s mistake in putting “the mechanisms of social assistance and social insurance . . . on the same plane as the prison, the barracks, or the school.” But it’s possible to extract a different lesson about the nature of the system that Foucault was analyzing.

At the heart of Zamora’s own project, he says, is a disagreement with Geoffroy de Lagasnerie’s argument that Foucault represents “a desire to use neoliberalism to reinvent the left.” Rather, he argues “that he uses it as more than just a tool: he adopts the neoliberal view to critique the Left.”

Here we have the crux of the problem. For Zamora, the key political opposition is between “neoliberalism” and “the Left.” But neoliberalism is only a historically specific phase of capitalist class strategy, one which itself developed in the context of the particular form of welfare capitalism and class compromise that arose in the mid-20th Century. So if “the Left” is conceived primarily as a project against neoliberalism, its aims will be limited to the restoration of the pre-neoliberal order, which Zamora defines as “social security and the institutions of the working class.”

But the value of Foucault, and others like him, is in highlighting the limits of any such strategy. Postwar welfare capitalism was, to be sure, a substantive achievement of the working class and the socialist movement. And it represented an equilibrium—call it the Fordist compromise—in which workers shared in the benefits of rising productivity.

But it was also an inherently contradictory and self-subverting order. This was true both from the perspective of capital and of labor. For the capitalist, long periods of full employment and strong labor movements meant a profit squeeze and escalating political instability as workers lost their fear of unemployment and poverty. The Fordist compromise was no more satisfactory for workers, as the historian Jefferson Cowie documents in his writing on the 1970’s. What was called the “blue collar blues” represented the desire of workers for more than just higher paychecks: for more free time, for control over the labor process, for liberation from wage labor.

The welfare state institutions that arose in that context were marked by the same contradiction: they were at once sources of security and freedom, and instruments of social control. As Beatriz Preciado says, in a quote Zamora produces as evidence of the bad new libertarian left: “the welfare state is also the psychiatric hospital, the disability office, the prison, the patriarchal-colonial-heteronormative school.” One aspect of the welfare state made it dangerous to the employing class, while another chafed on the employed (and unemployed). Welfare capitalism has always been characterized by this tension between universalistic benefits tied to a universal notion of social citizenship, and carefully targeted systems of qualification and incentive designed to prop up specific social relations, from the workplace to the street to the home. This is a key insight of the school of comparative welfare state study that distinguishes the decommodifying from the stratifying elements of the welfare state.

One way to think of this is as the permeation of the contradictions of bourgeois democracy into the economic sphere. Just as capitalist democracies exist in an uneasy tension between the principles of “one person one vote” and “one dollar one vote”, so does the system of economic regulation simultaneously work to support the power of the working class and to control it.

In contrast, Zamora seems unwilling to countenance this two-sided quality to class compromises in capitalism. As he puts it, the choice is either “that social security is ultimately nothing more than a tool of social control by big capital” (a view held by unnamed persons on “the radical left”), or that the bourgeoisie “was totally hostile” to institutions that “were invented by the workers’ movement itself.”

Zamora appears to view social insurance as representing the creation of “social rights” that cushion workers from the vagaries of the market, while leaving the basic institutions of private property and wage labor in place. This is a non-Marxist form of social democracy with deep theoretical roots going back to Karl Polanyi and T.H. Marshall, and it was arguably the main way in which the European social democratic parties saw themselves in their heyday. This kind of social democracy is the protagonist in Sheri Berman’s recent book on the history of European social democracy, in which the Polanyian pragmatists are pitted against Marxists who, in her view, ignored the exigencies of social reform altogether in favor of an apocalyptic insistence that the capitalist system would inevitably collapse and usher in revolution. The endpoint of this kind of Polanyian socialism is a welfare state that protects the working class from the workings of an unfettered market.

There is, however, another way to think about the welfare state from a Marxist perspective. It is possible to believe that fighting for a robust and universal welfare state is a necessary and desirable project, while at the same time believing that the socialist imagination cannot end there, because the task of humanizing capitalism generates its own contradictions. On this view, the system Foucault analyzed was a system that could not simply continue on in static equilibrium; it had to be either transcended in a socialist direction, or, as happened, dismantled in a project of capitalist retrenchment. From this perspective, the importance of figures like Foucault is not just as misleaders or budding reactionaries, but as indicators of social democracy’s limits, and of the inability of the mainstream left at the time to reckon with the crisis that its own victories had produced. By the same token, neoliberalism can be seen not just as a tool to smash the institutions of the working class, but also as a mystified and dishonest representation of the workers’ own frustrated desires for freedom and autonomy.

Zamora speaks of Foucault imagining “a neoliberalism that wouldn’t project its anthropological models on the individual, that would offer individuals greater autonomy vis-à-vis the state.” Other than the name, this does not sound much at all like the really existing neoliberal turn, which has only reconfigured the densely connected relationship between state and market rather than freeing the latter from the former. This vision of autonomy sounds more like the radical move beyond welfare capitalism, toward Wilde’s vision of socialist individualism. (Provided, that is, that we accord autonomy from bosses equal place with autonomy from the state.) Postmodernism as premature post-capitalism, as Moishe Postone once put it.

None of this is to say that the fight for universal social provision is unimportant; nor is it to dispute Zamora’s point that the fight for universal economic rights has tended, in recent times, to be eclipsed by “a centering of the victim who is denied justice,” as he quotes Isabelle Garo.

The point is only that it is worth thinking about what happens on the other side of such battles. Whether one finds it useful to think along these lines depends, ultimately, on what one sees as the horizon of left politics. Zamora speaks mournfully of the disappearance of exploitation and wealth inequality as touchstones of argument and organizing, and of the dismantling of systems of social insurance. Yet he himself seems unwilling to go beyond the creation and maintenance of humanized forms of exploitation, and a perhaps more egalitarian (but not equal) distribution of wealth. He speaks favorably of Polanyi’s principle of “withdrawing the individual out of the laws of the market and thus reconfiguring relations of power between capital and labor”; meanwhile, André Gorz’s elevation of the “right to be lazy” is dismissed and equated with Thatcherism.

This Polanyian social democracy as a harmonious “reconfiguring” of the capital-labor relation is a far cry from the Marxist insistence on abolishing that relation altogether. But its inadequacy as either an inspiring utopia or a sustainable social order is the real lesson of the crisis that gave rise to neoliberalism. And while Foucault may not have come to all the right conclusions about addressing that crisis, he at least asked some of the right questions.

Gamer’s Revanche

September 3rd, 2014  |  Published in Art and Literature, Feminism, Games, Political Economy, Politics  |  7 Comments

There was a time when I might have called myself a “gamer.” That is, I’m someone who plays and thinks about video games, and views them as a rich cultural form full of potential, both as art and as sport.

Now, however, even people who usually ignore games have been introduced to the figure of the “gamer,” and he is something entirely different. The gamer is threatened by women who share his tastes, and calls them “fake geek girls”. The gamer reacts to Anita Sarkeesian’s criticism of sexist tropes in video games with a bombardment of violent threats against her and her family. The gamer attacks feminist game creator Zoe Quinn with misogynist abuse and baseless allegations of corruption in reaction to a nasty blog post by a bitter ex-boyfriend.

It is not news that video games are often hostile to women, both as an industry and as a fan culture. Nor is it new that there are excellent feminist critics pointing this out within the games press, like Leigh Alexander and Samantha Allen. But the latest debates over misogyny and games have boiled over with new intensity in discussions among game consumers and creators, and have also reached beyond these circles. The New Inquiry has rounded up a collection of links for those who need to get up to date.

Evidently not everyone with a deep interest in games is a bitter, reactionary young man who reacts with violent misogyny to even the hint of social justice. But that faction of “gamers” has demonstrated its outsize ability to police the boundaries of debate and to drive out consumers, creators, and critics who challenge them, with the consent of a silent majority. What, politically, does this specific right-wing demographic represent?

The culture of video games has long been a fairly insular one—as has, to a greater or lesser extent, the wider “geek culture” in which it has been embedded, encompassing phenomena like Dungeons and Dragons, science fiction and fantasy novels and movies, and comic books. All of these forms have long histories of politically subversive, socialist, and feminist experimentation. But in their best-funded and most widely consumed commercial forms, they have especially catered to certain kinds of socially awkward boys and men, providing them with alternatives to dominant standards of masculinity.

At the same time, however, they cultivated an alternative misogyny, based on resentment of other men and a desire to usurp their patriarchal dominance, rather than overturn patriarchy entirely. Hence the geek culture is a breeding ground for Nice Guys who see themselves as persecuted outcasts but are unable to get over their desire to control women.

It’s impossible to dispute anymore that gaming is a completely mainstream mass-culture phenomenon in purely economic terms: consumer spending on games now rivals or exceeds spending on music and movies. And yet these gamers cling to an identity as marginalized underdogs, even as they defend the game industry’s existing practices of sexism, racism, and class exploitation.

Part of this has to do with the lag between economic and cultural acceptance. Games may be mainstream as an industry, but they have not yet achieved cultural parity with other media and other art forms. So we still get great film critics writing bumbling rants about why video games can’t be art, and the New York Times expressing wonderment at the notion that competitive sports can be mediated by computers.

This is not unusual for any young medium; cinema and television faced similar lags. Eventually, people who grew up with games will be in positions of cultural authority, and the idea of games as an inferior or ephemeral medium will disappear.

The assimilation of games into the larger culture poses a problem for a reactionary segment of gamers, however. It means engaging with a society that, while still capitalist, patriarchal, and suffused with racism, has also been challenged for decades by those it has traditionally marginalized. Wider engagement inevitably changes the parameters of geek culture, as new voices and new ideas are incorporated. Some gamers would like it both ways: they want everyone to take their medium seriously, but they don’t want anyone to challenge their political assumptions or call into question the way games treat people who don’t look and think like them. They hate and fear a world where games are truly made by and for everyone; where women make up a majority of the gaming audience; where a trans woman dominates one of the world’s great eSports.

It’s important to call these people what they are: not just anti-social jerks and not only misogynists, but as Liz Ryerson says, overall the right wing of people involved in games. No surprise, then, that they resemble conservatives who resentfully bemoan the liberal bias of Hollywood or the condescension of elite college professors. This isn’t a problem with gamer culture. It’s a problem with our entire culture, and specifically with the attitudes and behavior of a rightist, predominantly white and male section of that culture.

Right wing gamers project an overweening sense of superiority and entitlement, while at the same time constructing an identity based on marginality and victimization. In this, though, they aren’t really that different from many revanchist movements in capitalist societies. They’re much like the Tea Party right, which laments the disappearance of the America it recognizes—that is, the America where straight white men are systematically advantaged. This is a basic element of the reactionary mind: a fundamental opposition to equality as such. So it is with gamers for whom, as Tim Colwill puts it, “the worst possible thing that can happen here is equality.” This group of angry gamers no longer “recognizes their country,” as it were, what with all these women and queers and leftists running around.

This is why it’s wrong to suggest, as Ian Williams does, that gamer culture’s fatal flaw is to be “tainted, root and branch, by its embrace of consumption as a way of life.” The idea that communities organized around shared cultural consumption are inherently reactionary is so broad as to be vacuous, and it could apply equally to movie buffs, sports fans, or Marxist theory aficionados. It’s possible for any politics, left or right, to devolve into mere consumption choices. But that is not the problem currently on display among gamers. Indeed, the danger arises from their choice not merely to consume passively but to lash out in defense of what they believe “true” gamer culture should be.

The attacks on people like Anita Sarkeesian should be understood as collective political acts, and the reactionaries who carry them out should be understood as ideological representatives of a specific political tendency among those who create and play games, rather than waved off with moralizing Adbusters-ish rhetoric as a bunch of consumer dupes. What threatens these gamers is the notion that gaming does not exist only to reassure their misogynist preconceptions, and that they may have those premises challenged. For not only is the culture of games broadening, but even the big-budget commercial segment that most caters to the backward fantasies of these young men is contracting relative to other parts of the industry, like indie, mobile, and web games.

As Leigh Alexander points out in her more sophisticated deconstruction of the “gamer” identity, “It’s hard for them to hear they don’t own anything, anymore, that they aren’t the world’s most special-est consumer demographic, that they have to share.” Change the words “consumer demographic” to “beneficiaries of the welfare state,” and you could be talking about Tea Partiers defending their Medicare while denouncing welfare queens.

So this is not just a story about gamers. And within the boundaries of the games world, it is also not merely a story about a “toxic culture” among game fans, but rather about an industry that is structurally and systematically reactionary, and cultivates the same values among a segment of its consumers. It’s not just 4chan mobs terrorizing writers and game designers, it’s a games business that pushes out workers who don’t fit its political assumptions and demographic stereotypes, by way of the same sexist practices that pervade the tech industry generally.

Famous game designers and studio owners won’t openly endorse the threats and terror of anonymous trolls, but those trolls are the shock troops that help keep the existing elite in power. The respectable men in suits will continue to hire the same boys’ club while making excuses for why women just don’t fit in as programmers or game designers or journalists. But the fascistic street-fighting tactics of the troll brigade work in the service of keeping everything in the industry the way it is.

Beyond being a useful tool for shutting down dissenting voices, the existence of these angry-nerd movements among fans and consumers does what fascistic movements always do: divide the working class by getting some of them to identify with the boss, which in this case serves to shore up the hyper-exploitative industry that Ian Williams has elsewhere described. The existence of a vociferously hostile vigilante squad shutting down dissenting speech makes it easier for studio heads to hire nothing but the same white men and then work them to death, for forum administrators to claim free speech and shrug at the hatred spewed on their pages, and for the industry to claim that they’re only satisfying “the audience” when they reproduce the same narrow and bigoted tropes year after year. Meanwhile the “good” geeks get distracted from the main event as they tussle with the trolls, like SHARPs and Nazi skinheads brawling at a basement show.

Which isn’t to say that death threats are a great look for the suits at the top of the game industry hierarchy. The trolls may sometimes get out of control, just as the Republican establishment sometimes loses control of the Tea Party, or the industrial capitalists sometimes lose control of the Nazi brownshirts. But that doesn’t mean they aren’t part of one dialectically inter-related political project. The Cossacks work for the Czar. The street fighters are there to police the boundaries of discourse, to forcibly drive out anyone who challenges the existing hierarchy—women, people of color, LGBT people, even the odd white man deemed to be too sympathetic to the women and the commies.

Gaming doesn’t have a problem; capitalism has a problem. Rather than seeing them simply as immoral assholes or deluded consumerists, we should take gaming’s advanced wing of hateful trolls seriously as representatives of the reactionary shock troops that will have to be defeated in order to build a more egalitarian society in the games industry or anywhere else.