Archive for September, 2009

Stagnation and the Steady State

September 29th, 2009  |  Published in Political Economy, Politics, Socialism

Lots of interesting stuff in the most recent New Left Review. Last night I went through Gopal Balakrishnan’s latest, in which he argues that the capitalist world system is not in for a return to rapid growth any time soon, but is instead headed toward the stagnant “stationary state” that characterized pre-industrial civilizations:

Note that this is subtly different from the “catastrophist” predictions which leftists have historically made, and which have a terrible track record. Balakrishnan is not arguing that capitalism is on the verge of collapsing and giving way to something else, because there is no oppositional movement powerful enough to bring about this outcome. He opens his essay with a quote from Gramsci, speaking about the potential for a “crisis that lasts many decades”; the passage evokes a more famous remark of Gramsci’s, that “the old is dying, and the new cannot yet be born”.

Balakrishnan gives four reasons for his prognosis: “demographic disproportion, ecological deterioration, politico-ideological de-legitimation and geo-political maladaptation”. These refer to:

  • The aging of Western societies, leading to a situation in which there are fewer productive workers supporting the retirement benefits of a growing elderly population. The growing cost of health care and the weak prospects for productivity growth in the service sector will lead to serious fiscal pressure on the state.
  • The effects of climate change and other man-made disasters. While a transition to a “green capitalism” may theoretically be possible, at present it seems as though neither the bourgeoisie nor the political elite have the will to see this through.
  • Neoliberalism has been discredited by the crisis, but the Keynesianism of the postwar era seems equally incapable of restoring growth, and instead can only prop the system up with a series of desperate bailouts and fiscal stimulus measures. Meanwhile, there remains no credible ideological challenge from the left. The consequence is escalating depoliticization and cynicism across the world polity.
  • Finally, no clear successor to the American hegemon presents itself: China’s economy is still too weak to shoulder the burden, Europe lacks the unified state capacity to do it, and no ad-hoc alliance of world powers seems capable of truly restructuring the system and moving it to a new stage.

The piece is speculative, without much in the way of empirical evidence, but it’s nonetheless useful. As to these four explanations, some strike me as more plausible than others.

The aging of Western societies is a real phenomenon, of course, but I am skeptical that it will have quite the impact that Balakrishnan suggests. For one thing, there is no reason to suppose that the current retirement age will remain fixed–indeed, we have already seen some efforts to push it up. Since fewer jobs today require intensive manual labor, it is feasible for people to remain at work longer. Of course, increasing the retirement age would be a reversal of one of the great gains made by the working class in the twentieth century, and it would have highly inequitable consequences for those who do still work in manual occupations. But we are speaking here only of possible resolutions of the crisis, not desirable ones.

Balakrishnan is also too quick to accept at face value the argument that health expenditures will continue to accelerate rapidly while the service sector will experience no productivity gains, based on what appears to be a variant of Baumol’s cost disease theory. The argument about health care betrays some ignorance about health care economics. Much of the cost of care in the United States is driven by the perverse structure of the market for health services, and the comparison of modern welfare states shows that the state is quite capable of restraining cost growth. As for the rest of the service sector, the issue of slow productivity growth is a real one, but I would not discount the possibility of realizing substantial efficiency gains. Think of the growth in self-checkout machines at grocery stores, which could do to grocery employment what ATMs did to bank tellers. Or, even more dramatically, consider the innovations in online education which could completely upend the existing system of higher learning. I suspect, in fact, that increased productivity in services has been held back by the appallingly low wages at the bottom of the American labor market, which have left employers with little incentive to economize on labor.

As for ecology, I am unfortunately afraid that Balakrishnan is right. The way the climate change debate has unfolded in the United States and elsewhere certainly suggests that the capitalist class is incapable of putting its long-term interests ahead of short-term profits and ideological antipathy to state solutions such as carbon taxes or even cap-and-trade. (Although the recent defections from the Chamber of Commerce are perhaps hopeful signs.) At the same time, it’s still difficult to imagine that China will really consent to restrain its emissions in a way that the earlier industrializers never had to, all in order to ameliorate a problem it did not itself create.

While the consequences of climate change for humanity will be terrible, however, I do wonder whether they will really be as bad as all that for capital accumulation. On the one hand, ecological chaos would lead to a widespread destruction and devaluation of capital, allowing a new round of accumulation to proceed. On the other hand, there’s no reason in principle that adapting to and cleaning up after climate change-induced mayhem can’t be highly profitable, even if its human consequences are terrible. The relevant maxim here, it seems to me, is that the reason capitalism never collapses is that there always turns out to be a way of resolving the crisis at the expense of the working class.

The issue of ideological drift and delegitimization of political institutions is, to me, the strongest of the arguments Balakrishnan musters–though oddly, it is the one he gives in the least detail. The bankruptcy of neoliberalism and the insufficiency of mainstream Keynesian solutions are plain enough. As is the depoliticization and demobilization of the demos, the supposed mass base of bourgeois democracy. But what is even more ominous is the way in which bourgeois political institutions seem increasingly incapable of competently managing capitalism, even from a narrowly capitalist standpoint. Years of tax revolts and racist pandering from the right have led to a situation in which it is always possible to appropriate new funds for new programs (at least if they take the form of giveaways to business or the rich), but never possible to raise the tax revenue to pay for them. The end state of this trajectory is California, where Proposition 13, a 2/3 majority requirement for legislative tax increases, and a fanatical Republican minority have rendered the state an ungovernable wreck.

This situation appears intolerable, yet there remains no ideology on the horizon that seems capable of challenging it–certainly not Barack Obama’s technocratic center-rightism, which appears to be interested only in the restoration of postmodern finance capitalism’s status quo. And as Fredric Jameson points out, “the mass of people . . . do not themselves have to believe in any hegemonic ideology of the system, but only to be convinced of its permanence”.

The final of the four arguments, about geo-politics, is the most difficult to assess. As to the present moment, it certainly seems correct. American power is already over-extended, and the fiscal dilemmas outlined above may, eventually and hopefully, actually make cuts in defense spending thinkable in this country. I’m less pessimistic than Balakrishnan about the future of a united Europe, but it certainly doesn’t seem like any rapid further consolidation is likely in the near term. As for China, it may yet rise to take its place in a Sinocentric world system, as Giovanni Arrighi and other World Systems theorists predicted long ago. But in the meantime, it does seem as though we are in for a period of uncertainty and chaos.

It is never wise to discount capitalism’s ability to reinvent itself. We may yet see the launch of a new regime of accumulation in the coming years. Or, we may see another speculative bubble, putting off the crisis for another decade before culminating in an even worse crash. But if Balakrishnan is right, and we’re in for a slow, grinding stagnation, then the political order of the day will be the Gramscian “war of position”, as the left struggles to reorganize itself and raise the banners of “a better world in birth”.

But what is that world, and what will be inscribed on our banners? That is, what are the principles that would underpin an alternative to capitalism today? That, alas, is a topic for another day.

September 21, YDAU: Gaudeamus Igitur

September 21st, 2009  |  Published in Art and Literature

It’s been a couple of weeks since I finished Infinite Jest, a bit ahead of schedule. Now that the Infinite Summer is officially concluded, I feel like I ought to write out some reflections on the book, after I have a bit of distance from it but before it starts to fade from my memory.

So much has been said about this book over the years, and so much has been said again on all the blogs and forums associated with the Infinite Summer project, that it’s hard to figure out a way to add much additional value. But what strikes me now about this book is something that was really catalyzed by reading it as a part of this big, virtual book group. What I find most compelling about Infinite Jest, I think, is the way it reveals the personalities and commitments of the people who read it.

Any serious novel can do that, of course, if people invest themselves seriously in it. But IJ is particularly suited to this task, for a couple of reasons. The first is the way it attempts to convey an everyday life that is recognizably ours, even if set in a hyperbolic and vaguely plausible near future: the book is about us, now, in the culture of post-modern capitalism. And because it’s such a big baggy monster, that “us” can encompass a wide range of social worlds: privileged intellectual misfits, recovering opiate addicts, political terrorists, and so on.

But the book also tends to reveal people because it is so obviously shot through with Wallace’s own passionate feelings about the way our culture revels in self-abnegation (through substances or entertainment), as well as the insights he thinks he has about how we might live better, more meaningful, more fulfilling lives. The book isn’t a parable, of course, and there’s no unambiguous moral lesson to draw from it. Yet it’s hard to escape the sense that its author wanted it “to make us act, and to help us live”, as Durkheim once described the purpose of religion. It’s that aspect of the work, I think, that compels people to measure it against their own moral intuitions, and to measure themselves against the book.

In sum, the book is very much a representative document of late capitalism’s Age of the Memoir: confessional, therapeutic, a bit self-absorbed, and defined by a retreat from the political, or the social, and towards the personal and the introspective. But Wallace is smart enough to recognize this retreat for what it is, and by writing fiction he escapes the memoir culture’s small-minded obsession with consistency and factuality. The result is a work of art which does far more to plumb the soul of its reader than would the work of any real-life tennis prodigy, or addict, or Quebecois terrorist. (And lest we forget, Wallace himself was at least two of these things.)

A few examples.

Which is your favorite plot thread?

It says a lot, I think, which part of the Sierpinski gasket you prefer: Hal and ETA, or Don and Ennet House, or Marathe and Steeply. To judge from the forums, Hal is the character most readers identify with, which I suppose is unsurprising. Not only is he the most fully developed character in the early parts of the story, his background (privileged) and preoccupations (intellectual, neurotic) no doubt overlap heavily with those of the book’s audience–as indeed they do with mine. But maybe that’s why I never really identified (or Identified) with Hal. He came off to me as self-involved, whiny, and above all completely unable to put his own situation in its proper context. I sort of hated him for the same reason I hate the main character in The Catcher in the Rye. In both cases, part of my revulsion is rooted, I think, in recognizing in them some unpleasant elements of my own personality.

Then there’s Don Gately. In the second half of the book, he becomes the most sympathetic character. But he is also, to me, the most intellectually challenging to the reader. Hal’s way of coping is a familiar one to the kind of educated, literary person who reads IJ: self-absorption, over-thinking, and substances. Gately, on the other hand, is Wallace’s best effort at portraying an uneducated, unintellectual person who nevertheless at least approaches being a healthy and good person. It seems at times as though he is good because he resists self-examination–avoiding unnecessary thought being, of course, one of the clichés of AA. For a reader who is like Hal, and unlike Don, there’s no real way to completely embrace Don’s way without being either disingenuous about one’s own personality and history, or else fetishizing Don as some kind of “noble savage”. Yet we’re left with the inescapable conclusion that, if there’s any way out of the psychological traps Wallace describes, Gately is more or less it.

Then there’s the real sleeper, Marathe and Steeply. I haven’t come across anyone who says this is their favorite part of the book, and honestly it isn’t mine either. On the other hand, I did consistently find it funny and interesting, more so than the ETA sections a lot of the time. Maybe it’s because I read a lot of non-fiction and social science, so I have a high tolerance for theoretical exposition disguised as dialogue. But I also think that the writing in those segments is as richly evocative and lyrical as anything in the book, even though it’s just two guys talking to each other for hours on end. I got a kick out of simultaneously trying to picture Hugh Steeply’s absurd drag, and hear Remy Marathe’s over-the-top (and utterly non-verisimilitudinous) accent.

Wardine and yrstruly

I thought these sections were really good, and everything like that. I didn’t even find them difficult to read–once I got the hang of the dialect Wallace was going for, I could hear it in my head and the sections flowed forward quite poetically and musically.

The objections to this section seem to be of two flavors. The first, and less interesting one, comes from people who expect something different from novels than I do. Rather than an interesting challenge, they find these passages to be an affront to the reader, from an author who is more interested in making his reader work than in creating an enjoyable story. Of course, one of the major themes of the book, in my view, is that life is about much more than enjoyment, and that enjoyment can often get in the way of really living life. But that’s just what makes this reaction to these sections interesting, since others obviously didn’t interpret the book the same way I did.

The second objection stems, it seems, from a kind of political correctness, from people who find the attempt to evoke some form of African-American Vernacular English to be patronizing or offensive, a kind of minstrelsy. This strikes me as a misreading. For one thing, there’s no reason to believe that Wallace was aiming for an accurate reproduction of any existing dialect. But more importantly, I suspect that the people accusing Wallace of being patronizing are really projecting their own prejudices about language, which I discussed in a previous post: namely, that the speech of poor people, or uneducated people, or black people, is in some way “wrong” relative to the speech of people like David Foster Wallace. I also think it has to do with liberal uneasiness about race. It’s telling to me that some people apparently would have preferred Wallace to simply ignore the existence of black people, rather than trying–and maybe failing!–to represent some black people as part of his broader tableau. I didn’t see the same kind of hand-wringing about Don Gately, after all, even though Wallace’s childhood was a pretty long way away from Bimmy’s.

The Incredible Randy Lenz

For my money, Randy Lenz is by far the most interesting of the minor characters. He is totally repulsive, of course. On that absolutely everyone seems to agree. But the way people react to him can be tremendously revealing: the Infinite Summer forum thread about Lenz is a captivating display of this.

One immediate divide concerns Lenz’s animal torture: some people find it unspeakably horrible, more so than just about anything else in the book, to the point that they find themselves driven away from the book by it. Others (like me) don’t see how Lenz’s deeds can be worse than some of the horrible things that happen to human beings in the novel; we worry about a tendency to privilege the lives of “innocent” animals over those of less pure-seeming humans. For me, Lenz’s total and utter debasement wasn’t really driven home until the last scene he appears in, where we find him cutting off Poor Tony’s fingers in the hope of appeasing the AFR and getting another look at the Entertainment.

The other interesting thing about Lenz is the way he brings out people’s Manichaean tendencies. Most of the other characters in the book seem to provoke some level of sympathy from readers. Indeed, the way a character like Gately is written seems calculated to make you care about and root for him, despite the fact that he was directly responsible for more than one man’s death and allowed another to die because of his cowardice and addiction. But Lenz is an exception: a lot of the discussion of him takes for granted that he is some kind of pure evil, in contrast to the complex and conflicted characters that populate the rest of the book.

This, to me, is not the point of Lenz’s character at all. I see him as a necessary complement to the Gately character, a man who has to be seen as occupying the same continuum of addiction as all the other Ennet House residents. If we only had Gately, the story would be too simplistically uplifting: it doesn’t matter that your mom was a drunk who got beaten every day, or that you’re a serious Demerol addict with no real prospects–just Take It One Day At A Time and you’ll be OK! The point of Lenz, it seems to me, is that not everyone escapes from the various personal and social traumas that lead us to destroy ourselves. Not because they are weak or evil people, but because the self-destructive forces of addiction and trauma are so great. In a different way, Poor Tony is an example of this too–and I don’t think it’s an accident that he and Lenz are together in their last scene. To put it another way, it seems to me that the only way to fully sympathize with, and Identify with, Don Gately, the only way to fully appreciate the difficulty of his struggle, is to recognize that any of us could, under a certain set of circumstances, be pulled as low as Randy Lenz.

Infinite Jest isn’t the best book I’ve ever read, but it is one that will stay with me. I was going to write that I found it more emotionally affecting than other things I’ve read lately, but that isn’t quite right. What it is, is emotionally challenging. That’s what really makes it a different kind of “big book” from, say, Gravity’s Rainbow or Ulysses, which are primarily intellectual challenges. This is a book that really made it hard to maintain a pose of emotional detachment or ironic distance–which, in these times, is a real achievement.

Trans-Europe Express

September 21st, 2009  |  Published in Art and Literature, Work

Compare and contrast:

“Work is where they find their real fulfilment–running an investment bank, designing an airport, bringing on stream a new family of antibiotics. If their work is satisfying people don’t need leisure in the old-fashioned sense. No one ever asks what Newton or Darwin did to relax, or how Bach spent his weekends. At Eden-Olympia work is the ultimate play, and play the ultimate work.” –J.G. Ballard, Super-Cannes

“Old premise: work sucks, and after decades of toil, one has “earned the right” to get paid to do nothing. New premise: work is self-defined, self-led and empowering. Small-scale and global-reach entrepreneurship is a reality and this will make work a joy rather than a painful necessity.” -Pascal-Emmanuel Gobry, The American Scene

The libertarian right assures us that the preceding is a description of utopia.

This message intellectually sponsored by the Work Less Party.

Quantity, Quality, Social Science

September 17th, 2009  |  Published in Social Science, Statistics

Henry Farrell expresses the duality of social scientific thought by invoking a passage from one of my favorite books, Calvino’s Invisible Cities. The comments spin out the eternal quantitative vs. qualitative research debate, in both more and less interesting permutations.

Historically and philosophically, the whole qual-quant divide is an important object of social science, since it is itself a consequence of the same process of modernity and capitalist development that produces social science itself. It is only when society and its institutions appear at a scale too large for the human mind to grasp all at once that we require abstractions–particularly statistical and mathematical ones–to simplify and describe our social world to ourselves.

Within academia, however, there is a seemingly inescapable sense that qualitative and quantitative epistemologies are locked in some kind of zero-sum competition. These days people like to talk about “mixed methods”, but I agree with some commenters in the above thread that this too often amounts to doing a quantitative study and then using qualitative material (from interviews or ethnography or whatever) as illustrations or window dressing.

It seems to me that a lot of this is driven by a misapprehension about what either approach is really good for. The problem is that we expect quantitative and qualitative approaches to do the same kind of thing; that is, to collect data and use them to test well-defined hypotheses. I find that quantitative approaches are generally quite useful for taking well-defined concepts, and reasonably precise operationalizations of those concepts, and testing the interrelations between them. If your question is “do high tax rates inhibit economic growth”, and you have acceptable definitions and data for the subject and object of that hypothesis, then you can make useful–though never definitive–inferences using quantitative methods.

Qualitative methods are less often (though sometimes) suited to this kind of thing, because they are by nature rooted in the idiosyncrasies of specific cases and hence are difficult to generalize. What qualitative work is really good for, I think, is generating concepts. Quantitative analysis presupposes a huge conceptual apparatus: from the way ideas are operationalized, to the way survey questions are written, to the way variables are defined, to the way models are parameterized. Some of these presuppositions can be adjusted in the course of an analysis, but others are deeply encoded in the information we use. If you want to know whether the categories of a “race” variable are appropriate, the best strategy is probably a qualitative one, which will examine how racial categories are experienced by people, and how they operate in everyday life. Likewise, new hypotheses can arise from “thick description” which would not be apparent from consulting large tables of numbers.

This, however, brings up an issue that will probably be uncomfortable for a lot of qualitative social scientists, particularly those who are concerned with defending the “scientific” credentials of their work. Namely, can we draw a clear boundary between qualitative social science, journalism, and even fiction, with regards to their utility for driving the concept-formation process? Social science typically differentiates itself from mere journalism by its greater rigour; yet in my reading, the kind of rigour which is most important to qualitative work will be its interpretive rigour, rather than its precision in research design and data-gathering. Whether one is starting with ethnographic field notes or with The Wire, the point is to draw out and develop concepts and hypotheses in a sufficiently precise way that they can be tested with larger-scale (which is to say, generally quantitative) empirical data.

To put things this way seems to slide into a kind of cultural studies, except that the latter tends to set itself up as oppositional, rather than complementary, to quantitative empirical work. We would do far better, I think, to recognize that data analysis without qualitative conceptual interpretation is sterile and stagnant, while qualitative analysis without large-scale empiricism will tend to be speculative and inconclusive.

The Game Beyond the Game

September 10th, 2009  |  Published in Art and Literature, Cities, Politics, Social Science, Sociology

The new issue of City and Community has an article by Peter Dreier and John Atlas about a show that captivates many an urban sociologist, The Wire. Their piece extends comments they made last year in Dissent, in a symposium about the show. In both pieces, they repeat the common accusation that the show is nihilistic, because it presents urban problems but doesn’t show any solutions to them. To bolster the point, they dredge up a quotation from an interview, in which Simon proclaims that meaningful change is impossible “within the current political structure”.

As a corrective to what they see as The Wire‘s shortcomings, Dreier and Atlas catalogue some of the real community activists who have struggled against injustice in Baltimore, and won some small victories. And these are indeed inspiring and courageous people, who have managed to win some real improvements in people’s lives. But by bringing them up and presenting them as the solution to all the problems The Wire portrays, I think Dreier and Atlas miss the point of what David Simon and Ed Burns are doing with the show.

It’s misleading to say that The Wire is nihilistic. It’s true that the problems it portrays appear, within the context of the narrative, to be insoluble. And it may even seem, initially, as though the show is sympathetic to a conservative position: the poor will always be with us, government intervention always makes things worse, so we might as well just give up and try to make things better in our own small, individualist way. But this would be a profound misreading, because the show suggests, not that there are no solutions, but something far more complex. We come to understand, as the seasons unfold, that each of the dysfunctional institutions we see is embedded in a larger system that goes far beyond the scale of Baltimore. There is, as Stringer Bell puts it in season 3, “a game beyond the game”. We therefore have to conclude, not that there are no solutions, but that there may be no solutions at the scale of a single city.

The police find themselves hamstrung by their need to deal with national agencies like the FBI, which has been caught up in the mania of the “war on terror”. The dockworkers find their way of life destroyed by automation and the transformation of the global shipping industry. The mayor is at the mercy of Maryland state politics because he needs funding. The local newspaper struggles, and fails, to adjust to a world of profit-driven news and competition from new media. Even the drug dealers are at the mercy of their out-of-town “connect”.

None of this implies that Baltimore’s doom is inevitable. Neither imperialism, nor neoliberalism, nor Republican domination of state politics, nor the tabloidization of all journalism is inevitable. If they seem that way on the show, it is because of the careful and clever way in which the story is framed: these larger-scale institutions, the ones where the real agency lies, are always kept off screen and held beyond the reach of the characters. Thus the world the characters inhabit appears to them to be one where nothing can be changed. That doesn’t mean that the world of the show, which we viewers can sense, is actually so tragic.

But it is true that none of these problems can be solved in a single city, and most of them require a long-term and fairly radical project of social transformation. This may present difficulties for liberals who would prefer that social problems have incremental, non-threatening solutions. But by presenting small-scale local activism as an adequate response, Dreier and Atlas do a disservice both to the problems they address, and to the activists themselves.

Perhaps, however, their real political objective is somewhat different from simply promoting the importance of urban collective action. The giveaway comes at the end of the City and Community version of their essay:

Perhaps, a year or two from now, Simon or another writer will propose a new series to TV networks about the inner workings of the White House and an idealistic young president, a former community organizer, who uses his bully pulpit to mobilize the American people around their better instincts. This president would challenge the influence of big business and its political allies, to build a movement, a New Deal for the 21st century, to revitalize an economy brought to its knees by Wall Street greed, address the nation’s health care and environmental problems, provide adequate funding for inner-city schools, reduce poverty and homelessness, and strengthen the power of unions and community groups.

A show like that would certainly be a nice bit of wish-fulfilment for liberals who like to imagine a “great man” riding in and fulfilling all their fantasies. But it’s unclear what it has to do with our world, in which an ambitious young politician used his charisma and the wishful thinking of his base to ride to power, and then proceeded to cater to the needs of bankers and insurance companies while sinking America ever deeper into an intractable war in Afghanistan. Faced with that reality, the world of The Wire doesn’t look so nihilistic or unrealistic after all.

Never been in a (language) riot

September 7th, 2009  |  Published in Art and Literature, Social Science

I just got through a summer-long reading of David Foster Wallace’s Infinite Jest, which has consequently invaded all my waking thoughts. Among the conceits of the book is that a character, one Avril Incandenza, is fanatical about proper grammatical usage to the point of helping to incite the “M.I.T. language riots” at some point in the early 21st century.

As an undergraduate, I studied linguistics, so I was unbelievably tickled by Avril’s character. There are even references to Montague grammar, the logical formalisms and lambda-calculus of which I remember well, and whose descendants took up an unhealthy amount of my collegiate time.

But the funniest thing about Avril’s character is how exactly contrary she is to everything I know about really existing American academic linguistics. This, after all, is a woman who does things like replacing commas with semicolons on public signage and correcting “they” to “he or she” in her son’s speech. Yet the one thing that has stuck with me from my linguistic education is the idea that these kinds of rules are totally meaningless and stupid.

We used to talk about prescriptive and descriptive linguistics. (Wallace was no doubt aware of this, as he had Avril be a member of the “Militant Grammarians of Massachusetts”.) Prescriptive grammar meant telling people how they were supposed to use language, like your elementary school teacher telling you not to say “ain’t” or warning you against ending sentences with prepositions. Descriptive grammar, by contrast, was what real scientific linguists did. Its premise was that whatever people actually said was the real language, and it was our job to document that. All of the prescriptive rules were just superstitions or attempts by privileged social strata to make their way of speaking seem more “correct” than that of the less advantaged.

Now that I’ve slid over into a new career as a social scientist, I find that I’m all the more committed to this descriptivist dogma, and I newly appreciate its sociological sophistication. All too many social scientists, who are otherwise eager to acknowledge the role of social construction and power relations in making our social world, nevertheless accept the reality and the usefulness of grammatical rules. Whereas even the most apolitical of the linguists I have known would dismiss such rules in an instant as irrational prescriptivism.

But it turns out that what I see as the only sensible way of understanding language is still very much a minority view. And this always surprises me. It’s not that I’m unaccustomed to holding unpopular views; I am, after all, a socialist. But somehow the language issue seems like it should be more common-sense, less divisive. And then I read something like this, from an otherwise excellent Infinite Jest-related blog:

My argument is that as long as we agree that there are standards of grammar and spelling that we should aspire to (and most of us do agree), deviations will be seen as ignorance and possibly reflect poorly on the intelligence and abilities of the writer and therefore should be corrected. Since when is pointing out people’s mistakes the same as telling them you think they are second-class human beings?

Well, as regards the parenthetical assumption: I do not agree! And I find it slightly appalling that others do agree. It’s not even that, in practice, I disagree with this author’s advice. I can understand advocating prescriptive grammar in the same way that one would advocate, say, wearing a tie to a job interview: it may not make sense, it may not have anything to do with anything, but it’s what people expect and sometimes it’s best to just go with the flow and accede to the demands of the social structure.  The “will be seen as” in the sentence above suggests that kind of argument. But I get the sense that this is not how prescriptive grammarians feel, even smart and educated ones. They think that obeying pointless grammar rules really is somehow indicative of one’s intelligence or self-discipline or whatever.

What a waste. Not only does prescriptive grammar reinforce class hierarchies, it cuts educated and affluent people off from the richness, dynamism, and power of everyday American language.  Even if there weren’t all the other objections I’ve already adduced, there’d be this: in traditional upper-class white American English, there is no word for wack.

The ontology of statistics

September 4th, 2009  |  Published in Social Science, Statistics

Bayesian statistics is sometimes differentiated from its frequentist alternative with the claim that frequentists have a kind of platonist ontology, which treats the parameters they seek to estimate as being fixed by nature; Bayesians, in contrast, are said to hold a stochastic ontology in which there is variability “all the way down”, as it were. This distinction implies that frequentist measurements of uncertainty refer solely to epistemological uncertainty:  if we estimate that a certain variable has a mean of 50 and a standard error of two, we are saying only that we do not have enough information to specify the mean more precisely. In contrast, the Bayesian perspective (according to the view just elucidated) would hold that a measure of uncertainty includes not only epistemological but also ontological uncertainty: even with a sample size approaching infinity, the mean of the variable in question is the realization of some probability distribution and not a fixed quantity, and therefore can never be specified without uncertainty.

As a characterization of the frequentist-Bayesian divide, the above distinction is misleading and unhelpful. Andrew Gelman is, by any sensible account, one of the leading exponents and practitioners of Bayesian statistics, and yet he says here that “I’m a Bayesian and I think parameters are fixed by nature. But I don’t know them, so I model them using random variables.” Compare this to the comment of another Bayesian, Bill Jefferys: “I’ve always regarded the main difference between Bayesian and classical statistics to be the fact that Bayesians treat the state of nature (e.g., the value of a parameter) as a random variable, whereas the classical way of looking at it is that it’s a fixed but unknown number, and that putting a probability distribution on it doesn’t make sense.”

For Gelman, the choice of Bayesian methods is not primarily motivated by ontological commitments, but is rather a kind of pragmatism: he adopts techniques such as shrinkage estimators, prior distributions, etc. because they give good predictions about the state of the world in cases where frequentist methods fail or cannot be applied. This, I suspect, corresponds to the inclinations and motivations of many applied researchers, who as often as not will be uninterested in the ontology implied by their methods, so long as the techniques give reasonable answers.

Moreover, if it is possible to be a Bayesian with a Platonist ontology, it is equally possible to wander into a stochastic view of the world without reaching beyond the well-accepted “classical” methods. Consider, for example, logistic regression, which is by now part of routine introductory statistical instruction in every field of social science. A logistic regression model does not directly predict a binary outcome y, which can be 0 or 1. Rather, it predicts the probability of such an outcome, conditional on the predictor variables. There are two ways to think about such models. One of them, the so-called “latent variable” interpretation, posits that there is some unobservable continuous variable Z, and that the outcome y is 0 if this Z variable is below a certain threshold, and 1 otherwise. If one holds to this interpretation, it is perhaps possible to maintain a Platonist ontology, by stipulating that the value of Z is “fixed by nature”. However, this fixed parameter is at the same time unobservable, leading to the unsatisfying conclusion that the propensity of event y occurring for a given subject is at once fixed and unknowable.

In the latent variable interpretation, the predicted probabilities generated by a logistic regression are simply emanations from the “true” quantity of interest, the unobserved value of Z. An alternative interpretation is that the predicted probabilities are themselves the quantities of interest. Ontologically, this means that rather than having an exact value for Z, each case is associated with a certain probability that for that case, y=1. Of course, in the actual world we observe, each case in our dataset is either 1 or 0. But this second interpretation of the model implies that if we “ran the tape of history over again”, to paraphrase Stephen Jay Gould, the values of y for each individual case might be different; only the overall distribution of probabilities is assumed to be constant.
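The equivalence between the two stories is easy to see in simulation. Here is a minimal sketch (the coefficients are made up for illustration) in which the same binary outcome is generated both ways: once by thresholding a latent Z with logistic noise, and once by drawing each case as a Bernoulli trial with its own probability:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
beta0, beta1 = -0.5, 1.0          # hypothetical coefficients
eta = beta0 + beta1 * x           # linear predictor
p = 1 / (1 + np.exp(-eta))        # logistic link: P(y=1 | x)

# Latent-variable story: y = 1 when an unobserved Z = eta + logistic noise
# crosses zero. A standard logistic error makes this identical to the
# direct story, since P(eta + e > 0) is exactly the logistic CDF at eta.
z = eta + rng.logistic(size=n)
y_latent = (z > 0).astype(int)

# Stochastic story: each case is a Bernoulli draw with its own probability.
y_direct = rng.binomial(1, p)

print(y_latent.mean(), y_direct.mean())  # the two means agree closely
```

Because the logistic error built into the latent-variable story reproduces exactly the logistic link of the direct story, the two data-generating mechanisms are observationally indistinguishable; the choice between them is ontological, not empirical.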

Thus the distinction between the Platonist and stochastic ontologies in statistics turns out to be quite orthogonal to the distinction between frequentist and Bayesian. And it is an important distinction to be aware of, because it has real practical implications for applied researchers.  It will affect, for example, the way in which we assess how well a model fits the data.

In the case of logistic regression, the Platonist view would imply that the best model possible would predict every case correctly: that is, it would yield a predicted probability of more than 0.5 when y=1, and less than 0.5 when y=0. On the stochastic view, however, that degree of predictive accuracy is a priori held to be impossible, and achieving it indicates overfitting of the model. The best one can really aim for, on this view, is a model which gets the probabilities right–so that for 10 cases with predicted probabilities of 0.1, there should be one case where y=1 and nine where y=0.

This conundrum arises even for Ordinary Least Squares regression, even though in that case the outcome variable is continuous and the model predicts it directly. It has long been traditional to assess OLS model fit using R-squared, the proportion of variance explained by the model. Many people unthinkingly assume that because the theoretical upper bound of the R-squared statistic is 1, the maximum possible value in any particular empirical situation is also 1. But this assumption once again rests on an implicit Platonist ontology. It assumes that sigma, the residual standard error of a regression, reflects only omitted variables rather than inherent variability in the outcome in question. But as Gary King observed a long time ago, if some portion of sigma is due to intrinsic, ontological variability, then the maximum value of R-squared is some unknown value less than 1.* In this case, once again, high values of R-squared may be indicators of overfitting rather than signs of a well-constructed model.
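King’s point can be checked directly in simulation. In this sketch (sigma and the coefficients are arbitrary choices), we fit OLS with the true functional form, so there are no omitted variables at all; R-squared still tops out near 1 - sigma^2/Var(y), not at 1:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.normal(size=n)
sigma = 1.0                        # intrinsic, irreducible noise (assumed)
y = 2.0 + 3.0 * x + rng.normal(scale=sigma, size=n)

# Fit OLS with the true functional form: no omitted variables at all.
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
r2 = 1 - resid.var() / y.var()

# The ceiling implied by intrinsic variability: 1 - sigma^2 / Var(y).
ceiling = 1 - sigma**2 / y.var()
print(f"R-squared: {r2:.3f}, ceiling: {ceiling:.3f}")
```

Any model reporting an R-squared above this ceiling on data like these would not be capturing more of the truth; it would be fitting the irreducible noise.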

Statistics, even in its grubbiest, most applied forms, is philosophical; we ignore that aspect of quantitative practice at our peril. I am put in mind of Keynes’ remark about economic common sense and theoretical doctrine, which I will not repeat here as it is already ubiquitous.

*In practice, the residual variability may be truly ontological in the sense that it is rooted in the probabilistic behavior of the physical world at the level of quantum mechanics, or it may be that all variation can be accounted for in principle, but that residual variation is irreducible in practice, because of the extremely large number of very minor causes that contribute to the outcome. In either case, the consequence for the applied researcher is the same.

http://www.stat.columbia.edu/~cook/movabletype/archives/2007/12/intractable_is.html#more