Never been in a (language) riot

September 7th, 2009  |  Published in Art and Literature, Social Science  |  1 Comment

I just got through a summer-long reading of David Foster Wallace's Infinite Jest, which has consequently invaded all my waking thoughts. Among the conceits of the book is that a character, one Avril Incandenza, is fanatical about proper grammatical usage to the point of helping to incite the "M.I.T. language riots" at some point in the early 21st century.

As an undergraduate, I studied linguistics, so I was unbelievably tickled by Avril's character. There are even references to Montague grammar, the logical formalisms and lambda-calculus of which I remember well, and whose descendants took up an unhealthy amount of my collegiate time.

But the funniest thing about Avril's character is how exactly contrary she is to everything I know about really existing American academic linguistics. This, after all, is a woman who does things like  replacing commas with semicolons on public signage and correcting "they" to "he or she" in her son's speech. Yet the one thing that has stuck with me from my linguistic education is the idea that these kinds of rules are totally meaningless and stupid.

We used to talk about prescriptive and descriptive linguistics. (Wallace was no doubt aware of this, as he had Avril be a member of the "prescriptive grammarians of Massachusetts.") Prescriptive grammar meant telling people how they were supposed to use language, like your elementary school teacher telling you not to say "ain't" or warning you against ending sentences with prepositions. Descriptive grammar, by contrast, was what real scientific linguists did. Its premise was that whatever people actually said was the real language, and it was our job to document that. All of the prescriptive rules were just superstitions or attempts by privileged social strata to make their way of speaking seem more "correct" than that of the less advantaged.

Now that I've slid over into a new career as a social scientist, I find that I'm all the more committed to this descriptivist dogma, and I newly appreciate its sociological sophistication. All too many social scientists, who are otherwise eager to acknowledge the role of social construction and power relations in making our social world, nevertheless accept the reality and the usefulness of grammatical rules, whereas even the most apolitical of the linguists I have known would dismiss such rules in an instant as irrational prescriptivism.

But it turns out that what I see as the only sensible way of understanding language is still very much a minority view.  And this always surprises me. It's not that I'm unaccustomed to holding unpopular views; I am after all, a socialist. But somehow the language issue seems like it should be more common-sense, less divisive. And then I read something like this, from an otherwise excellent Infinite Jest-related blog:

My argument is that as long as we agree that there are standards of grammar and spelling that we should aspire to (and most of us do agree), deviations will be seen as ignorance and possibly reflect poorly on the intelligence and abilities of the writer and therefore should be corrected. Since when is pointing out people's mistakes the same as telling them you think they are second-class human beings?

Well, as regards the parenthetical assumption: I do not agree! And I find it slightly appalling that others do agree. It's not even that, in practice, I disagree with this author's advice. I can understand advocating prescriptive grammar in the same way that one would advocate, say, wearing a tie to a job interview: it may not make sense, it may not have anything to do with anything, but it's what people expect and sometimes it's best to just go with the flow and accede to the demands of the social structure.  The "will be seen as" in the sentence above suggests that kind of argument. But I get the sense that this is not how prescriptive grammarians feel, even smart and educated ones. They think that obeying pointless grammar rules really is somehow indicative of one's intelligence or self-discipline or whatever.

What a waste. Not only does prescriptive grammar reinforce class hierarchies, it cuts educated and affluent people off from the richness, dynamism, and power of everyday American language.  Even if there weren't all the other objections I've already adduced, there'd be this: in traditional upper-class white American English, there is no word for wack.

The ontology of statistics

September 4th, 2009  |  Published in Social Science, Statistics

Bayesian statistics is sometimes differentiated from its frequentist alternative with the claim that frequentists have a kind of platonist ontology, which treats the parameters they seek to estimate as being fixed by nature; Bayesians, in contrast, are said to hold a stochastic ontology in which there is variability "all the way down", as it were. This distinction implies that frequentist measurements of uncertainty refer solely to epistemological uncertainty:  if we estimate that a certain variable has a mean of 50 and a standard error of two, we are saying only that we do not have enough information to specify the mean more precisely. In contrast, the Bayesian perspective (according to the view just elucidated) would hold that a measure of uncertainty includes not only epistemological but also ontological uncertainty: even with a sample size approaching infinity, the mean of the variable in question is the realization of some probability distribution and not a fixed quantity, and therefore can never be specified without uncertainty.

As a characterization of the frequentist-Bayesian divide, however, this distinction is misleading and unhelpful. Andrew Gelman is, by any sensible account, one of the leading exponents and practitioners of Bayesian statistics, and yet he says here that "I'm a Bayesian and I think parameters are fixed by nature. But I don't know them, so I model them using random variables." Compare this to the comment of another Bayesian, Bill Jefferys: "I've always regarded the main difference between Bayesian and classical statistics to be the fact that Bayesians treat the state of nature (e.g., the value of a parameter) as a random variable, whereas the classical way of looking at it is that it's a fixed but unknown number, and that putting a probability distribution on it doesn't make sense."

For Gelman, the choice of Bayesian methods is not primarily motivated by ontological commitments, but is rather a kind of pragmatism: he adopts techniques such as shrinkage estimators, prior distributions, etc. because they give good predictions about the state of the world in cases where frequentist methods fail or cannot be applied. This, I suspect, corresponds to the inclinations and motivations of many applied researchers, who as often as not will be uninterested in the ontology implied by their methods, so long as the techniques give reasonable answers.

Moreover, if it is possible to be a Bayesian with a Platonist ontology, it is equally possible to wander into a stochastic view of the world without reaching beyond the well-accepted "classical" methods. Consider, for example, the logistic regression, which is by now a part of routine introductory statistical instruction in every field of social science. A logistic regression model does not directly predict a binary outcome y, which can be 0 or 1. Rather, it predicts the probability of such an outcome, conditional on the predictor variables. There are two ways to think about such models. One of them, the so-called "latent variable" interpretation, posits that there is some unobservable continuous variable Z, and that the outcome y is 0 if this Z variable is below a certain threshold, and 1 otherwise. If one holds to this interpretation, it is perhaps possible to hold to a Platonist ontology, by stipulating that the value of Z is "fixed by nature". However, this fixed parameter is at the same time unobservable, leading to the unsatisfying conclusion that the propensity of event y occurring for a given subject is at once fixed and unknowable.

In the latent variable interpretation,  the predicted probabilities generated by a logistic regression are simply emanations from the "true" quantity of interest, the unobserved value of Z. An alternative interpretation is that the predicted probabilities are themselves the quantities of interest. Ontologically, this means that rather than having an exact value for Z, each case is associated with a certain probability that for that case, y=1. Of course, in the actual world we observe, each case in our dataset is either 1 or 0. But this second interpretation of the model implies that if we "ran the tape of history over again", to paraphrase Stephen Jay Gould, the values of y for each individual case might be different; only the overall distribution of probabilities is assumed to be constant.

Thus the distinction between the Platonist and stochastic ontologies in statistics turns out to be quite orthogonal to the distinction between frequentist and Bayesian. And it is an important distinction to be aware of, because it has real practical implications for applied researchers.  It will affect, for example, the way in which we assess how well a model fits the data.

In the case of logistic regression, the Platonist view would imply that the best model possible would predict every case correctly: that is, it would yield a predicted probability of more than 0.5 when y=1, and less than 0.5 when y=0. On the stochastic view, however, that degree of predictive accuracy is a priori held to be impossible, and achieving it indicates overfitting of the model. The best one can really aim for, on this view, is a model which gets the probabilities right--so that for 10 cases with predicted probabilities of 0.1, there should be one case where y=1 and nine where y=0.
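The stochastic scorecard is easy to make concrete with a little simulation (Python here, with made-up numbers, purely as an illustration): even when the predicted probabilities are exactly right, classifying every case correctly is impossible, while the probabilities themselves check out.

```python
import random

random.seed(2)

# Simulate 100,000 cases whose true P(y=1) is known exactly.
probs = [random.uniform(0.05, 0.95) for _ in range(100_000)]
ys = [1 if random.random() < p else 0 for p in probs]

# "Platonist" scorecard: classify y=1 whenever p > 0.5.
# Even with the true probabilities in hand, accuracy falls well short of 1.
accuracy = sum((p > 0.5) == (y == 1) for p, y in zip(probs, ys)) / len(ys)

# "Stochastic" scorecard: among cases with p near 0.1,
# roughly one in ten should have y=1.
near_tenth = [y for p, y in zip(probs, ys) if abs(p - 0.1) < 0.02]
rate = sum(near_tenth) / len(near_tenth)

print(round(accuracy, 2))  # well below 1, even knowing the true model
print(round(rate, 2))      # close to 0.1
```

The true model fails the Platonist test and passes the stochastic one, which is the whole point.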

This conundrum arises even for Ordinary Least Squares regression, even though in that case the outcome variable is continuous and the model predicts it directly. It has long been traditional to assess OLS model fit using R-squared, the proportion of variance explained by the model. Many people unthinkingly assume that because the theoretical upper bound of the R-squared statistic is 1, the maximum possible value in any particular empirical situation is also 1. But this assumption once again rests on an implicit Platonist ontology. It assumes that sigma, the residual standard error of a regression, reflects only omitted variables rather than inherent variability in the outcome in question. But as Gary King observed a long time ago, if some portion of sigma is due to intrinsic, ontological variability, then the maximum value of R-squared is some unknown value less than 1.* In this case, once again, high values of R-squared may be indicators of overfitting rather than signs of a well-constructed model.
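King's point is easy to see in a simulation (again Python, again with made-up numbers): generate data whose residual variance is intrinsic, fit the correct model, and R-squared still cannot reach 1. Its ceiling is the systematic share of the variance, here 9/(9+4), or about 0.69.

```python
import random

random.seed(7)

# True data-generating process: y = 2 + 3x + noise.
# The noise is intrinsic: no omitted variable can explain it.
n = 50_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [2 + 3 * x + random.gauss(0, 2) for x in xs]

# Fit simple OLS by the closed-form formulas.
mx = sum(xs) / n
my = sum(ys) / n
beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        / sum((x - mx) ** 2 for x in xs))
alpha = my - beta * mx

ss_res = sum((y - (alpha + beta * x)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r2 = 1 - ss_res / ss_tot

# Ceiling: var(3x) / (var(3x) + var(noise)) = 9 / (9 + 4), about 0.69.
print(round(r2, 2))
```

A fitted R-squared much above that ceiling would be a symptom of overfitting, not of a better model.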

Statistics, even in its grubbiest, most applied forms, is philosophical; we ignore that aspect of quantitative practice at our peril. I am put in mind of Keynes' remark about economic common sense and theoretical doctrine, which I will not repeat here as it is already ubiquitous.

*In practice, the residual variability may be truly ontological in the sense that it is rooted in the probabilistic behavior of the physical world at the level of quantum mechanics, or it may be that all variation can be accounted for in principle, but that residual variation is irreducible in practice, because of the extremely large number of very minor causes that contribute to the outcome. In either case, the consequence for the applied researcher is the same.

http://www.stat.columbia.edu/~cook/movabletype/archives/2007/12/intractable_is.html#more

One last time

May 9th, 2009  |  Published in Data, Social Science, Statistical Graphics

Final thing on the car-culture regression. Below is a comparison of the actual data on Vehicle Miles Traveled with my reconstruction of Nate Silver's model, and my model including lagged gas prices, housing prices, and the stock market.

Comparing two models of American driving habits

I "seasonally adjusted" the miles data by fitting a model predicting miles based only on the month of the year. The miles data (whether the actual data or the prediction from a model) is then corrected by subtracting the coefficient for the month it was collected. This data is normalized according to the level of driving in April.
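For anyone wanting to replicate the adjustment, here's a minimal sketch (Python, with a fabricated series standing in for the real VMT data): a regression on month dummies alone reduces to each month's mean, and subtracting each month's effect relative to April flattens the seasonal pattern.

```python
import random
from collections import defaultdict

random.seed(0)

# Fabricated monthly series: a linear trend plus a known summer bump.
months = [(i % 12) + 1 for i in range(120)]  # ten years, Jan=1 ... Dec=12
values = [200 + 0.1 * i + (10 if m in (6, 7, 8) else 0) + random.gauss(0, 1)
          for i, m in enumerate(months)]

# Regressing the series on month-of-year alone reduces to each month's
# mean; express each effect relative to April, as in the post.
by_month = defaultdict(list)
for m, v in zip(months, values):
    by_month[m].append(v)
month_effect = {m: sum(vs) / len(vs) for m, vs in by_month.items()}
april = month_effect[4]

# Subtract each month's effect to get the seasonally adjusted series.
adjusted = [v - (month_effect[m] - april) for m, v in zip(months, values)]
```

After the subtraction, the summer bump is gone and only the trend (plus noise) remains, which is what makes the model-vs-actual comparison in the chart legible.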

An even better fit is possible with a more complex model that includes a) average monthly temperatures and b) an interaction between gas prices and time. But this simpler model suffices to show that Silver's original finding was probably an artifact of his failure to control for wealth effects and the lagged effect of gas prices.

The lesson, I suppose, is: beware of columnists on deadline bearing regressions!

Predictin'

May 9th, 2009  |  Published in Data, Social Science

Update to the post below: I decided to see how well my model will predict miles traveled going forward. My model only includes data through January, as Nate Silver's did. But we have the data through February now, so we can see how well the model works there. We also have almost all the data needed to predict March--the only thing missing is the government's Housing Price Index. But that doesn't change too much month to month, so I made a prediction based on the February value:

           Predicted    Actual
February     215.37     215.77
March        245.31     ??

The March numbers should be out soon, so we'll see how my model performs.

Moment of Zen

May 8th, 2009  |  Published in Data, Social Science, Statistical Graphics

Here are the variables I used in the models for the previous post. Simplistic social theories are left as an exercise for the reader.

Economic variables, 1990-2009

Attempt to Regress

May 8th, 2009  |  Published in Data, Social Science, Statistical Graphics

I'm loath to say an unkind word about Nate Silver. Besides boosting the profile of my alma mater, he's done more than anyone else to improve the reputation and sexiness of my present occupation: statistical data analyst. This is all the more welcome at a time when other people are blaming statistical models for, well, ruining everything.

But I confess to being a bit annoyed when I read Silver's recent article about the changes in American driving habits. In that article, Silver argues that we're seeing a real shift away from car culture, based on the following:

I built a regression model that accounts for both gas prices and the unemployment rate in a given month and attempts to predict from this data how much the typical American will drive. The model also accounts for the gradual increase in driving over time, as well as the seasonality of driving levels, which are much higher during the summer than during the winter.

All well and good, except that Silver doesn't provide the model or the data! He asks us to take his word for it that in January, Americans "drove about 8 percent less than the model predicted."

Now, I don't expect anyone to publish regression coefficients in Esquire magazine, but Silver does have a rather well-known website, so he could have put it there. The analysis was already done and published, so I don't see how it would have hurt Silver to publish the data after the fact. Which is what makes me suspect that he kept things deliberately vague in order to maintain a sense of mystery and awe around his regression models. Particularly because in this case, the underlying model is actually quite simple.

Which is a shame, because the simplicity of the model is actually the most appealing thing about it. It's a great example of a situation where a regression illuminates a relationship that would be really hard to discern using simple descriptive statistics. The model is a perfect balance between being simple enough to be believable, and complex enough to really gain you something over simple descriptives. In fact, it's something that I plan to refer to in the future when my less quant-y friends question the need for regressions.

Which is why I decided to recreate Silver's analysis from scratch, a project that took me about an hour. First I had to figure out what Silver's model was. Based on the paragraph above, I decided on:

miles = gas + unemployment + date + month

Monthly miles driven are modeled as a function of that month's average gas prices, the unemployment rate in that month, the date, and which month of the year it is. The date variable will capture the "gradual increase" in miles traveled. I use month to capture the "seasonality of driving levels". I could have grouped the months into seasons, but why not use a more precise measure if you've got it?
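(For the curious, the design matrix that formula implies can be sketched by hand. This is just an illustration in Python with hypothetical values; R's lm() builds it automatically, dropping April as the baseline factor level, which is why no monthApril coefficient appears in the output below.)

```python
# The eleven month dummies that appear in the fitted output; April is
# the omitted baseline level (R drops the first factor level).
MONTH_DUMMIES = ["August", "December", "February", "January", "July", "June",
                 "March", "May", "November", "October", "September"]

def design_row(unemp, gasprice, date, month):
    """One row of the design matrix for: miles ~ unemp + gasprice + date + month.

    Returns [intercept, unemp, gasprice, date, 11 month indicators],
    i.e. k = 15 columns in all.
    """
    return [1.0, unemp, gasprice, date] + [
        1.0 if month == m else 0.0 for m in MONTH_DUMMIES
    ]
```

A July row gets a 1 in the monthJuly column; an April row is all zeros in the dummy block, so April's seasonal level is absorbed into the intercept.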

The next step was to find the data: from different sources, I obtained data on miles traveled, gas prices, and unemployment. All of these sources start around 1990, so that's the time frame we'll have to work with.

With that in hand, it was time for some analysis. Using R, I combined the different data sources and ran myself a regression:

lm(formula = miles ~ unemp + price + date + month)
                coef.est coef.se
(Intercept)     98.52     3.71
unemp           -2.09     0.34
gasprice        -0.08     0.01
date             0.01     0.00
monthAugust     17.90     1.40
monthDecember   -8.82     1.40
monthFebruary  -30.26     1.42
monthJanuary   -22.03     1.40
monthJuly       17.87     1.42
monthJune       11.34     1.42
monthMarch       0.42     1.42
monthMay        12.56     1.42
monthNovember  -10.00     1.40
monthOctober     5.85     1.40
monthSeptember  -2.55     1.40
---
n = 222, k = 15
residual sd = 4.25, R-Squared = 0.98

That R-Squared of 0.98 means that about 98% of the actual variation in miles traveled is explained by the variables in this model. So it's a pretty comprehensive picture of the things that predict how much Americans will drive. A one point increase in the unemployment rate, in this model, predicts a 2.09 billion mile decrease in miles driven. And gas prices are in cents, so a one-cent increase in the price of gas will, all things being equal, translate into an 80 million mile decrease in miles driven.

The next step was to check out Silver's assertion that recent data on miles driven is lower than the model would predict. Recall that Silver's model over-predicted January miles driven by 8 percent. My model predicts that in January, Americans should have driven 239.6 billion miles. The actual number was 222 billion miles. The prediction is--wait for it--7.9 percent more than the actual number! That's pretty amazing actually, and it indicates that my data and model must be pretty damn close to Silver's.

With the model in hand, however, we can do a bit better than this. Below is a chart showing how close the model was for every month in my dataset. It's similar to the graphic accompanying Silver's Esquire article, only not as ugly and confusing.

Comparison of a regression model of vehicle miles driven with the actual value

The graph shows the difference between the prediction and the actual number. When the point is above the zero line, it means people drove more than the model would predict. When it's below the line, they drove less.

You can see here that there are multiple imperfections in the model. Mileage declined a little faster than predicted in the late 90's, and then rose faster than expected in the early 2000's. It's possible that this has something to do with a policy difference between the Bush and Clinton administrations, but I'm not enough of an expert to say.

What jumps out, though, are those last three points on the right, corresponding to this past November, December, and January. All of them are way off the prediction, and the error is bigger than for any other time period. This strongly suggests that something really has changed. What's not totally clear, though, is whether it's the car culture that's different, or whether it's this recession that's unlike the other two recessions in this data set (the early 90's and early 2000's).

The next logical step is to consider some additional variables. Some commenters at Nate's site pointed out that you might want to factor in changes in wealth--as opposed to changes in income, which are at least partly captured by the unemployment variable. Directly measuring wealth is a little tricky, but we can easily measure two things that are proxies for wealth, or people's perceptions of wealth: the stock market and the housing market. So I went google-hunting again and found two more variables: the monthly closing of the Dow, and the government's housing price index. Put those into the regression, and away we go:

lm(formula = miles ~ unemp + price + date + stocks + housing + month)
               coef.est coef.se
(Intercept)    117.87     4.13
unemp           -1.64     0.48
gasprice        -0.11     0.01
date             0.01     0.00
stocks           1.01     0.30
housing          0.24     0.03
monthAugust     18.40     1.20
monthDecember   -8.88     1.21
monthFebruary  -30.58     1.21
monthJanuary   -22.12     1.19
monthJuly       18.28     1.20
monthJune       11.74     1.20
monthMarch       0.30     1.20
monthMay        12.77     1.20
monthNovember  -10.02     1.21
monthOctober     6.42     1.21
monthSeptember  -1.92     1.21
---
n = 217, k = 17
residual sd = 3.60, R-Squared = 0.98

R-squared looks the same, but the residual standard deviation is lower, which indicates that this model predicts more of the variation in the data than the last one. And the new variables both have pretty big and statistically significant effects. The stock market close is scaled in thousands, so the coefficient indicates that for every 1000 point increase in the Dow, we drive 1 billion more miles. The housing price index defines 1991 prices as 100, and went into the 220's during the bubble. Every one point increase in that index predicts a 240 million mile increase in driving.

Here's another version of the graph above, for our new model:

Predicted and actual miles, from a model with stock and housing prices

The same patterns are still present, but the divergence between the predictions and the actual numbers is smaller now. (Incidentally, I have no idea what happened in January of 1995. Did everyone go on a road trip without telling me?) It still looks like there's been some qualitative change in US driving habits recently, but the case is less clear cut. In particular, the late 90's now looks like another outstanding mystery. Mileage declined by more than the model expected then, but why? At the moment I have no particular hypothesis about that.

My final model tests something else that appears in Nate's article:

There is strong statistical evidence, in fact, that Americans respond rather slowly to changes in fuel prices. The cost of gas twelve months ago, for example, has historically been a much better predictor of driving behavior than the cost of gas today. In the energy crisis of the early 1980s, for instance, the price of gas peaked in March 1981, but driving did not bottom out until a year later.

OK, so let's try using the price of gas 12 months ago as a predictor along with current prices. This will force us to throw away a bit of data, but we can still fit a model on most of the data points:

lm(formula = miles ~ unemp + price + price12 + date + stocks +
 housing + month, data = data)
 coef.est coef.se
(Intercept)    112.28     3.82
unemp           -0.93     0.42
gasprice        -0.07     0.01
gasprice12      -0.08     0.01
date             0.01     0.00
stocks           0.93     0.26
housing          0.25     0.02
monthAugust     18.19     1.04
monthDecember   -8.99     1.05
monthFebruary  -31.26     1.06
monthJanuary   -22.20     1.05
monthJuly       18.17     1.05
monthJune       11.58     1.05
monthMarch       0.10     1.06
monthMay        12.88     1.05
monthNovember  -10.06     1.04
monthOctober     6.29     1.04
monthSeptember  -2.08     1.04
---
n = 210, k = 18
residual sd = 3.07, R-Squared = 0.99
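Mechanically, the lagged predictor just pairs each month's row with the gas price from twelve rows earlier, which is why the first year of observations falls out of the sample. A quick sketch (Python, with hypothetical rows rather than the real data):

```python
# Hypothetical monthly records: (date_index, gas_price, miles_driven).
rows = [(t, 100.0 + t, 220.0 + 0.1 * t) for t in range(36)]

# price12 is the gas price from 12 months earlier; the first year of
# rows has no lag available, so it is dropped from the sample.
lagged = [
    (t, price, rows[i - 12][1], miles)  # (date, price, price12, miles)
    for i, (t, price, miles) in enumerate(rows)
    if i >= 12
]
```

Once the data is in this shape, price and price12 enter the regression as two ordinary predictors.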

It looks like current gas prices and last year's gas prices are about equivalent in their effect on mileage. Now let's look at the graph of prediction error again:

Miles driven, predicted and actual, third model

Lo and behold, the apparently anomalous findings from the last few months have disappeared. This isn't the last word, of course, nor is it the perfect model. But it no longer appears that US driving behavior is so unusual, when you account for all the relevant economic contextual factors.

Anyhow, that's enough playing around in the data for me for the time being. In the end, this whole exercise helped me understand what I like best about Nate Silver's work. He's inventing a new media niche, call it "statistical journalist". He uses publicly available data to produce quick, topical analysis that illuminates the issues of the day in a way neither anecdotes nor naive recitations of descriptive statistics can. He may play fast and loose at times, but his methods are transparent enough that people like me can still check up on him. I certainly hope that this kind of writing becomes an established sub-specialty with a wider base of practitioners than just Silver himself.

Graphs > Tables, again

March 16th, 2009  |  Published in Data, Statistical Graphics

Over at the Monkey Cage, Lee Sigelman notes a new study from the CDC that tries to figure out how many people and households in each state have no land line and rely entirely on cell phones. Being a good student of Andrew Gelman, my first thought upon clicking the link was: "these tables are horrible, they should be graphs!" My second thought was, "Gelman will probably come along and produce graphs of the data himself". So before that happens, I thought I'd take a stab at summarizing the paper's first couple of tables:

Cell phone only data

Click the image to see it full-size.

The intervals aren't classical 95% intervals--they're some kind of fancy estimation from the CDC that you'll have to click the link to find out about. The hollow points/dashed lines are the "modeled" estimates, and the black points/solid lines are the "direct estimates". The points are in order according to the modeled estimates.

The nice thing about displaying this graphically is that you can see how much uncertainty there is on some of these estimates, so you get a better idea of what this graph does and does not tell you. For example, Washington DC is estimated to have the highest percentage of adults in cell-only households, but the confidence intervals reveal that this doesn't really mean anything--the most you can say is that DC is on the high end of cell-only prevalence.

Richard Rorty and the Giant Pool of Status

February 16th, 2009  |  Published in Social Science

By way of OrgTheory, I see that Gideon Lewis-Kraus has a nice little essay on Neil Gross's recent book on Richard Rorty. The piece strikes a number of resonant notes for me: on the terminally wack state of academic sociology, the status hierarchy of the university, and the relationship between intellectuals and public life. But one odd thought I had when reading the piece departs from the following passage:

Bourdieu suggested—often impolitely—that the generative basis for a career in thought was to be found in the lusty drive for the kind of symbolic and cultural "capital," his terms of greatest currency, that would help the thinker, and her field, achieve a higher status. In other words, the academy functions largely as an apparatus for refining and transmitting the cultural codes that serve the perpetuation of privilege. Professors, as "the dominated fraction of the dominant class," are the sentries of the class structure.

Gross's book is an attempt to argue that this account does not entirely apply to Rorty. Or, more precisely, that it does not apply to Rorty's later career, after he had gained tenure and a place of prominence within academic philosophy. Instead, Gross claims, it was an inner devotion to an intellectual self-concept as a "leftist American patriot", rather than a bid for status, which drove Rorty's evolution into an odd sort of postmodern pragmatist.

Lewis-Kraus has a number of insightful things to say about the uses and limitations of this account. But as I considered the matter of Rorty's cultural capital, I was put in mind of something about, well, regular old Capital, the kind that the boys at the hedge funds have been busy vaporizing of late. Last year, the NPR show "This American Life" did a great story about the present economic crisis, called "The Giant Pool of Money". The title refers to the roughly $70 trillion of accumulated capital that, in the early part of this decade, went looking for profitable investment opportunities. The trouble was, there just weren't enough low-risk high-reward opportunities--neither investing in the production of stuff nor in U.S. treasury bonds was going to cut it. So instead, this money started flowing into the mortgage market, which seemed like a low-risk, high-reward investment, until it didn't.

The important thing to notice is that once you have a really, really huge pile of money, it gets more and more difficult to find profitable ways of re-investing that money. This has been pointed out recently by various commentators explaining the root causes of the crisis, and it is a rediscovery of something originally pointed out by Marx.

Anyway, it occurred to me that Rorty's intellectual makeover could be thought of in a similar way. By the early 1970's, he had accumulated a large amount of cultural capital by basically playing the game of analytic philosophy according to the rules accepted by its leading figures. But once he had risen to the top of that group, there were bound to be limited returns to a strategy of reinvesting cultural capital into the austere discipline of analytic philosophy, what Lewis-Kraus calls "the If-P-then-Q school of compelling reasons." The most he could have hoped for was to be remembered by philosophy professors and grad students, and not by much of anybody else.

The alternative was to plough his cultural capital into a higher-risk project, but one with potentially greater returns. Namely, to stake his reputation on an attempt to break out of the confines of the philosophy department, to redefine both the place of philosophy and the vocation of the intellectual. In staking out such an iconoclastic path, there is always the danger that one will be doomed to obscurity--recognized by neither the profession you have spurned nor the public you court. But if the gambit pays off, you become precisely what Rorty became: someone read across disciplines and even outside of academia, the sort of person whom sociologists write intellectual biographies of.

Which is not to say that this is the only explanation for Rorty's career, or even the most important one. Lewis-Kraus's own observations about the dead-end trajectory of '60's philosophy and present-day sociology are perhaps more to the point. But living as we do in a society which accumulates fame and status in a small number of hands, it's worth speculating about the consequences of "overaccumulating" that status.

And as is the case with money capital, the over-accumulation of cultural capital can have beneficial as well as deleterious results. Just as the tech bubble of the late 1990's led to an overinvestment in broadband capacity that created the basis for the future growth of the Internet, so does the overaccumulation of status among a few star academics allow some of them to do truly transformative and pathbreaking work, as Rorty did.

Pessimism of the Intellect

October 30th, 2008  |  Published in Politics, Statistical Graphics

My boss is a prominent political scientist and an Obama supporter. This afternoon, he was ribbing me for being a "pox on both your houses" ultra-leftist who only grudgingly acknowledges that electing Obama would be good for the left.

After our meeting had ended, I came up with a perfect encapsulation of my feelings about Obama, which has the added benefit of being an extremely nerdy joke. My point estimate is that it does matter whether Obama wins. But my confidence interval for how much it matters includes zero. In the spirit of Jessica Hagy, I present the argument in graph form:

How I feel about Obama
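The joke can be made literal with a toy calculation. The numbers below are invented purely for illustration (hypothetical "how much it matters" scores), but they show the statistical situation the joke describes: a positive point estimate whose 95% interval nonetheless straddles zero.

```python
import math

# Hypothetical "how much it matters" scores -- invented data for
# illustration only, not real measurements of anything.
scores = [0.8, -0.3, 1.1, -0.9, 0.6, -0.2, 0.4, -0.7, 0.9, 0.1]

n = len(scores)
mean = sum(scores) / n                       # point estimate: positive
var = sum((x - mean) ** 2 for x in scores) / (n - 1)
se = math.sqrt(var / n)                      # standard error of the mean

# Rough 95% interval using the normal critical value 1.96; a t critical
# value for n=10 would make the interval slightly wider.
lo, hi = mean - 1.96 * se, mean + 1.96 * se

print(f"point estimate: {mean:+.2f}")
print(f"95% CI: ({lo:+.2f}, {hi:+.2f})")
print("interval includes zero:", lo < 0 < hi)
```

With these made-up numbers the estimate comes out positive while the interval contains zero, which is exactly the shape of the argument in the graph.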

Art as Art, Anti-Art, Post-Art

June 30th, 2008  |  Published in Art and Literature

The Museum of Modern Art's survey of the work of Olafur Eliasson was fittingly titled Take Your Time. The best of his installations seem unremarkable at first, and only reveal their depth to those viewers who linger and contemplate them beyond the point when the casual museum-goer has walked away. (Whereas the lesser works pique an immediate interest that is quickly sated.) In Wall eclipse, a light shines onto a rotating mirror, casting shadows and deflected light onto the walls. The complete rotation takes several minutes, and only then does its nature become fully apparent: a cube rotates in a virtual space, black on one side and luminous on the other; at a certain moment, the shadow face blots out an entire wall.

In the gallery above Eliasson there is, by happenstance, a small exhibition of a few works by Ad Reinhardt and Mark Rothko. Only five Reinhardt paintings are represented; of these, only two are really memorable. And yet the show confirms Reinhardt's overlooked importance, for these are two of the most important paintings of the twentieth century. They are among the famous ``black paintings'', compositions of nearly featureless canvases made up of either solid black or rectangles of nearly indistinguishable near-blacks. They are thrilling to see up close. Moreover, they mark the moment that Reinhardt lived through and memorialized and that, in retrospect, appears as a critical turning point in the history of culture: the end of art.

This is not to say the end of the aesthetic, or of painting, or of beauty. Rather, it is the end of a specific historical configuration that corresponds to the capitalism of the nineteenth and twentieth century. What has ended is the work of art as an autonomous, detached, and potentially transcendent thing. Art as replacement for God, as beyond politics and everyday life. Eliasson, for example, is not an artist in this sense. His environments are immersive, and yet it is precisely this quality that prevents them from existing as autonomous art objects. He enhances and colors the environment rather than creating something that stands outside of it.

Eliasson's work also fits comfortably within the parameters of contemporary consumer culture. To speak of co-opting or assimilating his work to capital would be redundant: his identity with the needs and prerequisites of commodity production is absolute. The installations are prototypes for the environmental modifications soon to appear in forward-thinking cocktail lounges and waterfront condominiums. Works like I only see things when they move, with its kaleidoscope of colors marching across the walls, or Negative quasi brick wall's undulating and glittering interiors, are captivating but ultimately empty. Ignorable background visuals, they become part of the stylish ambience of life in the new professional managerial class. Although perhaps Eliasson's trajectory is better illustrated by another of his projects in New York City: a series of artificial waterfalls, constructed with the blessings of Mayor Michael Bloomberg, which will line the East River through the summer of 2008. If Bloomberg is a CEO mayor who has sought to remake the entrepreneurial city into New York, Inc., then Eliasson is just the man to commission for the atrium at corporate headquarters: a spectacle for the masses; a Promethean engineering achievement to underscore the organization's power; a pleasant image devoid of subversive or critical content. The roar of the waterfalls cannot mask the sound of real estate capital, valorizing itself.

Faced with the decay of art into commerce, it is tempting to demand something more of Eliasson. But what exactly? Ad Reinhardt's example shows the difficulty of demanding more. He reacted directly to the tendencies that would culminate in an artist like Eliasson, which first burst into view in the 1960's. Then, they were radical and disruptive, advanced with the fervor of those who would break down the stale dogmas of high art. First came the anti-artists: Dada, Fluxus, Situationism. Using parodic forms and recycled cultural fragments, they ridiculed and devalued the heroic pretensions of art, its delusions of autonomy and integrity. Perhaps this is what Reinhardt means when he explains the concept underpinning the black paintings: ``a pure, abstract, non-objective, timeless, spaceless, changeless relationless, disinterested painting -- an object that is self-conscious `no unconsciousness' ideal, transcendent, aware of no thing but art `absolutely no anti-art'.'' He crafts objects that are built to resist the irreverent bricolage of the anti-artists.

In the long run, however, the real threat to art was posed from another side, by the post-art movements that tended to dissolve high art into low, and thereby assimilate it to the logic of commodity production and the culture industry. The Lenin of these revolutionaries, and the key figure in the movement to annihilate art, was Andy Warhol. He announced with characteristic bravado that ``making money is art and working is art and good business is the best art'', and at a stroke he turned the existential anguish of his forebears into a pile of nonsensical false problems, while at the same time re-unifying the imperative of art with the reigning business ethics of capitalist society. No doubt the hedge fund kingpins, surrounded by the abstract expressionist masterpieces hanging in their lobbies, smile serenely to themselves when they contemplate Warhol's prophecy.

Reinhardt's paintings can be seen as an attempt to resist and defy this onrushing order, a last stand in defense of art. Which is not to say that they are anti-capitalist, or political at all in any intelligible way. They are opposed to any subsumption of art beneath a larger social project, whether that of capital or that of the left. For the power of these paintings comes, first of all, from their implacable anonymity, the lack of texture and character that Reinhardt speaks of. They refuse to be anything other than themselves, to have any meaning other than the brute fact of their existence. But more important even than this, the paintings maintain their power because they are essentially resistant to reproduction. They are destroyed by any attempt at a copy: on a postcard or a print, they become meaningless black squares. In their physical presence, however, the blackness takes on depth and richness, demanding the viewer's attention and producing a flickering illusion of shapes and textures. You must go to these paintings, for they cannot come to you. They are singular objects, and cannot be wrenched out of their hiding places and put to use for the polemical or commercial purposes of others.

Reinhardt has this in common with Eliasson: his work, too, demands prolonged engagement and attention before its meaning becomes apparent. But where Eliasson aims only to provoke a pleasant sense of wonder, Reinhardt's black paintings are full of dread and foreboding. Eliasson's Beauty, an indoor rainbow produced by a waterfall in a darkened room, invites introspection and contemplation much as Reinhardt's black rectangles do. But the repose its shimmer invites is one that is shorn of all the disturbing and challenging elements of the black paintings; it turns us into regional vice-presidents, doing a bit of meditation and Buddhist chanting at the end of a long day's work. Reinhardt poses a challenge of an altogether different order, and his paintings unnerve rather than reassure. As objects that can be neither interpreted nor reproduced, they envelop the viewer in a private space without indulging in the illusion that there can be anywhere safe from the relentless march of capital accumulation in the world outside. If the ultimate effect is a sort of nihilist oblivion---if, that is, they ultimately amount to the expression of a kind of living death---then this feels like a depressing but honest settling of accounts with the world in which the paintings exist.

It seems that Reinhardt meant his black paintings to be ramparts, from which to defend what he called ``art as art'', timeless and ``post-historic''. The paintings are not a response or a reaction to anti- and post-art so much as they are an active assault on them, an attempt to dispel their malign force. ``An artist-as-artist / Has always nothing to say, / And he must say this over and over again.'' For ``A fine artist by definition is not a commercial / or industrial or fashion / or applied or useful artist.'' And yet the polemics and pranks of the anti-artists were no alternative; ``A fine artist has nothing to use, / has no need for any meaning, / and would not use himself or his work for anything.'' Only a posture of immoderate refusal, he thought, could overcome the vulgarizing and cheapening forces then being brought to bear on art.

Inasmuch as he produced something that could not be assimilated or re-appropriated by the Warholians and anti-artists, Reinhardt succeeded. And yet the edifice he constructed became, in the end, a tombstone. For Reinhardt could always be ignored. To refuse spectacle, to refuse image, to refuse reproduction: this is, in our times, to remain silent. And he has been silent; his ambitions, if they are remembered at all, are viewed as a curious memento of a time when artists still believed in the world-shaping power of their works. If we let Reinhardt speak in his own words, the ludicrous arrogance of his ambitions only confirms that he bears the message of some social force that was long ago defeated and beaten into submission.

Yet we shut out his voice at the peril of falling into the errors he had already foreseen. At MoMA, his paintings stand next to those of Rothko, still celebrated---and who, like Eliasson, was fascinated by light and color, but who still believed in the possibility of transcendent expression in art. After the reproach of Reinhardt's absolute negation of art, Rothko's paintings, for all their beauty, seem quaint and naive. And today, anyone who does ``painting'' in the old way is engaged in a basically fraudulent enterprise, aping a dead style in order to lubricate the circuits of accumulation in a hypertrophied art market. To take refuge in the aesthetic is to prop up an ideology of beauty which, as Fredric Jameson says, can today only be meretricious. And yet a turn to the ``political'' is equally implausible, in the absence of any movement in the deeper layers of society as a whole. Witness, for example, the stale nostalgia politics of the warmed-over ``peace tower'' at the 2006 Whitney biennial---along with the pathetic intervention of Richard Serra, whose agitprop drawing of an Abu Ghraib prisoner only underscored the art world's isolation from the realities of war and state terror.

All of this leaves the artist with little room to maneuver, and in this context Eliasson's limitations can be viewed more sympathetically. And within those limits, his work is not empty of value. He is, above all, a craftsman who uses light itself as a medium with unequalled facility. The entryway to his exhibition is bathed in a harsh, narrow yellow light; the effect of Room for one colour is to render the entire space monochromatic. The tour de force of the show, however, is 360° room for all colours, a circular room whose blank walls gradually change colors. The piece harnesses the ability of light to shape perceptions and emotions, even as it successfully abstracts light itself, independent of the things it illuminates, as a compositional element. The visitor experiences one mood after another: daybreak; an overcast fall day; the underwater light of an aquarium; the crumbling of sunlight into dusk; the harsh neon of the nightclub. At the auxiliary exhibition at P.S. 1, another piece makes a more focused use of the same technique: The natural light setup uses a light box on the ceiling to immerse the visitor in a sequence of permutations of daylight, each evoking a different emotional state.

In his guise as a phenomenologist of the science of light, Eliasson is perhaps best compared to artists like Albers or Moholy-Nagy, whose abstract and schematic images crossed the line between art and design. Like a typographer or a graphic artist, Eliasson is a technician of everyday life, deconstructing the elements of our surroundings in order to effectively cajole or manipulate an audience. Such skills are adaptable to the advertising agency just as much as the gallery floor. But if he gives no reason to doubt that art as art ceased to exist after Reinhardt, Eliasson's work does suggest that the new art-as-corporate-handmaiden is not without its real pleasures and joys.

At P.S. 1, there is one other juxtaposition that, as with Reinhardt, illuminates Eliasson's position. On a stairway leading up to his galleries, Markus Copper's Kursk occupies a darkened room. Inside, there is barely room to navigate along the walls. The center of the room is taken up with a set of diving suits hung from the ceiling, facing away from the viewer toward a wall. The suits move erratically; lights come on and off from inside their helmets; a hissing sound fills the room; the suits all appear to be looking toward something the viewer cannot see, and periodically there is a loud clanking sound. The effect is disturbing and sometimes terrifying; visitors often exit with distraught expressions. The contrast with Eliasson's lighthearted and non-threatening confections could not be more stark. And yet Copper's work would not be recognizable to Ad Reinhardt as Art, any more than Eliasson's. The purpose of such a brutal display is to provoke unease about the entire museum experience; it reminds us that life is elsewhere, and that it is disturbing.

All of these artists are symptoms of something: the death of the utopian impulse in contemporary culture. Some of this, perhaps, is due to the loss of art's debunking or defamiliarizing function, its mission to lay bare the truth about our cherished national ideologies. Such work is now superfluous, for, as Jameson remarks, the project of actively deluding the masses is redundant: ``the mass of people . . . do not themselves have to believe in any hegemonic ideology of the system, but only to be convinced of its permanence.'' The artists, it seems, are equally convinced. They find themselves caught between the comfortable life of beautifying capitalism and the unrewarding calling of denouncing one's own career and one's audience, attempting to break the spell of art by repulsing the viewer and forcing them out into the world. Ultimately, neither is a solution to anything. Art, like politics, must come to recognize ``a better world in birth'', to identify and evangelize for those elements of our present life that point toward some entirely different, better future. Failing that, we are left with Eliasson's inconsequential lights, and Reinhardt's terrible darkness.