Thursday, December 18, 2008

A Bit of News

Our esteemed colleague, the pravdakid, has recently undergone a change of academic status. Congratulations, David.

Saturday, December 13, 2008

Something I just have to get off my chest

Went to see Synecdoche, New York this past week. I was pretty stoked, given what I'd read about the movie and the reputations of almost everyone involved. Which made the narcissistic, jejune piece of crap that I sat through for two hours all the more infuriating.
If I ever see Charlie Kaufman in person I'm going to punch him in the nose.

Monday, December 8, 2008

Football and Chabon

This next comment is for loyal PF reader Al C., who has been following the fortunes of the University of Tulsa football team this fall. Al will no doubt be wondering how it was that a highly favored Hurricane squad lost, at home, to East Carolina this weekend.
Here's the answer. They just weren't all that good. Many non-BCS teams are undeservedly ignored by the national media and other coaches: Utah and Boise State would be examples here. Tulsa was deservedly ignored: their defense was not great and their offense built up the big numbers by doing well against very weak teams. I say this with no great sadness in my heart, since I am not a big fan of Div-1 sports, and certainly not at a school as small as Tulsa (we have only about 3,000 undergraduates, 4,000 students total). I suspect that there is some non-crazy explanation for why we have a Div-1 football team, but for the life of me I can't figure out what that would be.
At any rate, I was not at the game. I was at the public library Saturday morning, listening to Michael Chabon, who received our city's big literary prize, The Peggy Helmerich Award. He was charming, if a bit disorganized. I had read one of his Details columns a few days before at the gym. It was about an old writing teacher who had passed away, and it was a lot more compelling than the talk, by and large. The best line from the library discussion was when he said that his relationship to plot was analogous to Bob Dylan's relationship to singing.

Friday, November 14, 2008

Optimistic Quote for the Day

“Reason is at its best related to generosity, to being able to acknowledge the truth or justice of another’s claim even when it cuts against the grain of one’s own interests and desires. To be reasonable in this sense involves not some desiccated calculation but courage, realism, justice, humility, and largess of spirit; there is certainly nothing clinically disinterested in this.”

Terry Eagleton, The Illusions of Postmodernism, Blackwell, 1996, 123.

Wednesday, November 5, 2008

Quote for the Day

Mongol General: "What is best in life?"

Conan the Barbarian (aka the current Republican governor of California): "To crush your enemies, to see them driven before you, and to hear the lamentations of their women."

Monday, October 20, 2008

When We Fail to Manage Risk

Last Thursday I went to Huntsman Hall to see a talk by U.S. Department of Homeland Security Secretary Michael Chertoff, titled “When We Fail to Manage Risk.” The Dow had fallen 700 points the previous day. The housing bubble and the credit crunch and the subprime meltdown and the extra digit on the Times Square debt clock were all over the news. I was curious to gauge the mood of the business community, so I went mainly to eavesdrop on the audience. For the presentation itself, I had low expectations. A few weeks prior, the Department of Homeland Security had run a campus recruiting event – the word on the street was that they needed statisticians and analysts to deal with all that surveillance data they’d been gathering. So I thought this talk would be part of a recruitment drive for talented students whose private sector prospects had plunged with the markets.

The eavesdropping was dull, since most people were engaged with their phones and laptops. But the talk was more interesting than I’d expected. Chertoff drew on natural disaster scenarios to illustrate his points, but the economy was the obvious subtext. He started out by solemnly informing us that the core responsibility for risk management lies with the private sector. After all, he said, the right to balance risk and reward – to take chances and reap the consequences – is “the definition of freedom.” After the shout-out to the free-market folks, Chertoff spent the rest of his talk explaining why the market can’t do it alone. In his words, “we have difficulties because of the way we are wired.” Because humans are bad at “estimating time-horizons,” we choose immediate gratification over long-term safety. Because we harbor a blindness to “collateral and cascading external costs,” we set up conditions whereby our actions come back to haunt us. Because we “resist transparency,” and hate to explain ourselves, we foster faithlessness. Unchecked, markets (us, only bigger) are impulsive, selfish, and secretive. That’s why we need the Department of Homeland Security to protect us from ourselves. Apparently, strong yet realistic regulation can check dangerous human tendencies without smothering human initiative. In his diagnosis of our current financial problems, Chertoff neatly de-politicized greed by framing it as a hard-wired flaw that could be diagnosed and compensated for through sound managerial techniques. From a Foucauldian perspective, the talk practically analyzed itself.

From a Hornerian perspective, I thought the talk was a little perverse. To illustrate why people should be required to build elevated houses on flood plains, Chertoff described touring a neighborhood after a bad hurricane. Whole rows of houses were “crushed, as if a giant had stepped on them,” but occasionally he’d see a house that was practically untouched, because it had been built on stilts. In Chertoff’s words, these homeowners could move back into their intact houses “like it never happened.” OK, yes, that’s an overstatement -- if your neighborhood is a pile of rubble, even if you still have a house, your situation is not good. My point is not that Chertoff is being disingenuous, but rather, that his focus on individual risks and benefits constrained his ability to promote his larger cause. The anecdote exemplified a problem for public discourse in general: when we talk about public responsibility for the public good, we are rhetorically hamstrung by the neoliberal frame. A focus on individual outcomes impoverishes our ability to conceive of the common good on its own terms, without the baggage of zero-sum ideology (i.e., one guy wins and one guy loses), which (IMO) is the constant subtext of the individualist frame. Chertoff made this point: if the builder and the buyer accept the costs of elevating a house, then the owner will survive a hurricane. But there’s a false equivalence afoot. I admit, I’m a knee-jerk communitarian, so people who take sides in the eternal struggle between big bad gummint and the feisty lone individual may not agree with me, but maybe in order to really conceive of the community protecting the community, we need to talk about it and think about it through a different frame.

I read The Grapes of Wrath last August on vacation with my family (I got teased for picking such a gloomy beach read). It’s a remarkable book, and now that everyone’s talking about the Great Depression I think of it often. Anyone interested in sustainable agriculture will find it eerily relevant. Steinbeck has a lot to say about community and the common good. Since the economic slide appears to be snowballing, I’ve found myself pondering how things will go – will we start using powdered milk and Tang again? Will multigenerational households become typical? Will we go all survivalist and suspicious and start hunkering down with our guns and canned goods? On the latter point, I found this very cogent argument against the impulse to hoard and defend. To survive a financial/social/ecological catastrophe, Charles Hugh Smith wrote,

…the best protection isn't owning 30 guns; it's having 30 people who care about you. Since those 30 have other people who care about them, you actually have 300 people who are looking out for each other, including you. The second best protection isn't a big stash of stuff others want to steal; it's sharing what you have and owning little of value. That's being flexible, and common, the very opposite of creating a big fat highly visible, high-value target and trying to defend it yourself in a remote setting.

Read the whole essay: Smith is not a pie-in-the-sky hippie dreamer. In his view, people are sinful, some more than others, and bad things happen when we fail to manage risk. However, he and Chertoff have very different ideas about how to survive a catastrophe. Smith’s pragmatic view of risk management invokes a moral economy / gift economy fueled by voluntary relationships, not a market economy running on profit and loss. Sure, there’s a false equivalence here too: Smith describes a smallish community on the edge of a wilderness, and Chertoff is working on a national scale. Still, Smith makes me feel a lot more capable and optimistic about surviving the years ahead. I’m “on the job market” – I sent out a bunch of job applications, and today I got an email notice that one job search has been suspended because of “the current economic crisis affecting American higher education (and beyond).” I hope this does not signal the start of a trend, but if it does, I’ll work on my home-brewing skills and follow Smith’s advice on risk management.

Note: After posting this I read a Chris Hedges essay on TruthDig that quoted Canadian philosopher John Ralston Saul on elites' obliviousness to a concept of the commons. He puts it nicely:

“Their inability to see the human as anything more than interest driven made it impossible for them to imagine an actively organized pool of disinterest called the public good.”

Tuesday, October 14, 2008

Why You Should Vote, etc. (Part 2)



In the last post, I made an argument about why it was important that everyone vote, even so-called “bad” voters. I picked as my example a group of farmers from Appalachia. Some readers might accuse me of playing to stereotype there. Well, I was playing to stereotype, and consciously so, and that’s part of my whole problem with Brennan’s argument.

In his Bloggingheads discussion with Will Wilkinson, Brennan is pretty vague about what constitutes “good voting.” It evidently includes being well informed, using reason rather than emotion in making one’s choice, and carefully considering both sides of an issue before deciding. Also, people who question their own motives are probably good voters. Of course those criteria, self-applied, would include pretty much every American who has voted in any election since the birth of the republic. As Brennan himself concedes, a friend of his—a person who Brennan assures us is NOT a good voter—saw himself reflected in the description of the good voter: consequently, this friend was pretty enthusiastic about Brennan’s argument. Brennan, unsurprisingly, also includes himself in the category of people who are good voters, as does Wilkinson.

So, who isn’t in that category? It’s pretty big, according to Brennan, who estimates only about 30 to 40 per cent of the population are good voters (Wilkinson says that’s high). Although they don’t directly discuss demographics, it’s pretty clear from their indirect comments (at least it’s clear to me) that many if not most of the bad voters these guys have in mind are: a) uneducated, and b) poor. That’s okay, though, because Brennan figures that he’ll make better decisions for these people than they would for themselves. Even Wilkinson finds that notion breathtakingly presumptuous: to which Brennan’s response is more or less to give a gimcrack grin and shrug his shoulders. Yeah, what’re we gonna do? Someone has to look after these rubes.

The problem with this kind of elitism is not that it underestimates the wisdom of the people, who are every bit as narrow-minded and ignorant as it assumes. The problem is that it overestimates the wisdom of elites. And it seems to me that Brennan’s argument exaggerates the intelligence especially of a particular segment of the American elite: the professoriate. I say this not to disparage that class, of which I am a proud member. We are no worse than any other group, all things considered. We are also no better. We have a role to play in democratic life and public debate, and when we ignore that role, democracy suffers. But people like Brennan strike me as being wondrously unaware of how restricted their lives are, how narrow their range of daily experience (and no, I don’t care if he was raised by a single mom on food stamps. He’s an Ivy League professor. I grew up amidst grain fields in Southern Alberta. That’s not where I live now). The ways that we make choices in our lives are not appropriate for everyone. Again, this is the potential genius of democracy, if it works right: different kinds of people arriving at their decisions in different ways.

In the end, an argument like Brennan’s does what all ideology does. It takes a specific perspective—which may be perfectly fine on its own terms—and then universalizes that perspective, so that it becomes the right and natural one for all people. Such behavior is inevitably, and profoundly, undemocratic.

Thursday, October 9, 2008

Why You Should Vote in the Upcoming Election (even if you're a dummy), part 1

A Brown University professor named Jason Brennan has recently come out with an argument opposed to the popular catechism that everyone has a kind of moral duty to vote. Brennan’s position, on the contrary, is that some people have a moral duty not to vote. Although I doubt it will have much of an effect, I want to put forward my own two cents on this issue. For one thing, I think that idea is just wrong, and for another, I think that it could eventually have pernicious consequences. (Since Brennan’s paper on this subject is not published yet, I am not going to quote directly from it, but instead draw on a conversation that he had about the idea with Will Wilkinson on Bloggingheads TV; the Bloggingheads video page has a link to a draft of the paper.)

Brennan’s argument goes something like this. As citizens, we have a moral duty to do what is best for our country, if not in the positive sense of doing good (i.e., picking up trash on the highway, working at a soup kitchen), then at least in the negative sense of not doing harm. In the case of voting, however, many of us often do harm to the common good, because we don’t understand the issues very well, and we don’t reason very well. Because of that, we favor bad policies and vote for politicians who enact those bad policies (unsurprisingly, Brennan says his idea was sparked by Bryan Caplan’s book, The Myth of the Rational Voter). Therefore, those of us who lack knowledge and/or proper reasoning skills have a moral obligation not to vote.

In the discussion with Wilkinson, Brennan uses two analogies to help illustrate the point. First analogy: let’s say a group of six people decide to go out for dinner. One person in the group (me, let’s say) is from out of town. While I may have a right to put forward my choice, it wouldn’t be a very good idea for me to do so. I should rather defer to the other five people, who live in that city and know the restaurant scene. By refusing to exercise my right, everyone (including me) ends up better off. Second analogy: we don’t think that everyone has the right to be a surgeon, or an airline pilot. Only the people with the requisite skills should be doing the sorts of things that surgeons and airline pilots do. Why then do we assume that everyone should be picking the President? (This is a pretty old argument, BTW; it goes back at least as far as Plato’s Apology.)

I think that Brennan has a number of problems here, but the most important is that he uses the wrong model to understand what elections are and what they do. Elections are not simply the accumulated decisions of a set number of individuals (call them citizens), all striving to make the most rational choice. That model of voting has actually been in trouble for a long time, long before Caplan decided to write his book. One of its biggest problems is that it can’t explain why it makes rational sense for any individual to vote at all, since a single vote has almost no chance of making a difference to the end result. Elections are better understood as choices that a people make collectively. They serve as a feedback loop. As long as the situation is (more or less) okay, the status quo obtains. When things get too bad, the population bands together to throw the bums out.

The thing is, if this system is going to work for the benefit of all its members, and not just some of them, it needs input from all sectors of society. A basic assumption of democracy is that the best people to decide whether or not a change is needed, seen from the perspective of a certain geographic area, or economic class, or any other social group, are members of that group itself. Poor farmers in Appalachia may not know what the Supreme Court does or who sits on it, and they may not know who their congressman is. They may make fundamental mistakes in economic reasoning. But what they do know, very well, is what daily life is like as a poor farmer in Appalachia. They are the only ones, in fact, who have that information. If the system is going to function in a proper—hell, let’s use a real word—in a just manner, then it needs to get that information from them. Remember, too, that the only way the system ever pays attention to their information is when they vote. Policy papers from well-meaning social workers don’t cut it, because ignoring such claims has no real-world consequences for politicians.

Certainly, this feedback is imperfect, for the kinds of reasons that Brennan and Caplan and others point out. Voters are ill-informed, prejudiced, and cannot give convincing reasons for why they do what they do. And they do sometimes make bad decisions, as a group. American voters may be on the brink of making one of the most disastrous decisions in the history of the republic, IMO (though recent polling is reassuring on that score). But to paraphrase E.B. White, the leap of faith required for democracy (and it is very much a leap of faith, although I think in the end a justified one) is to believe that more than half the people are right more than half the time. While an individual may persist in choosing against his or her own interests, groups over time tend not to (see James Surowiecki’s The Wisdom of Crowds). Moreover, the cure implied by Brennan’s argument is even worse than the disease it purports to treat, since it would have a group of self-selected “good voters” deciding what is best both for themselves and for everyone else. This never ends up well, for reasons I will explain in my next post.

Thursday, September 25, 2008

The source of modern resentment

(A blog entry that has no relevance at all to pigs, lipstick, hockey moms, bailouts, suspended campaigns, mortgages, Presidential debates, or the Chicago Cubs)

The present era is so proud that it has produced a phenomenon which I imagine to be unprecedented: the present’s resentment of the past, resentment because the past had the audacity to happen without us being there, without our cautious opinion and our hesitant consent, and even worse, without gaining any advantage from it. Most extraordinary of all, this resentment has nothing to do, apparently, with feelings of envy for past splendours that vanished without including us, or feelings of distaste for an excellence of which we were aware, but to which we did not contribute, one that we missed and failed to experience, that scorned us and which we did not ourselves witness, because the arrogance of our times has reached such proportions that it cannot admit the idea, not even the shadow or mist or breath of an idea, that things were better before. No, it’s just pure resentment for anything that presumed to happen beyond our boundaries and owed no debt to us, for anything that is over and has, therefore, escaped us.

Javier MarĂ­as, Your Face Tomorrow: Fever and Fear

(trans. by Margaret Jull Costa)

Monday, September 15, 2008

David Foster Wallace

David Foster Wallace died last Friday by suicide. In the late nineties, after Infinite Jest came out, a friend of mine was living with a producer at a bare-bones cable access talk show in Southern California. Once she showed me an interview with Wallace that her boyfriend had taped. Wallace was nervous, sweating, a little paranoid, worrying on-camera that people would actually see the interview, fretting that he would come off wrong, and so forth. If you’ve had conversations with extremely self-conscious, hyper-intelligent, depressed people you would recognize the style – arguing with oneself while second- and third-guessing the meanings and intentions of the interlocutor. Defensive, and very, very uncomfortable, but at the same time self-aggrandizing and annoyed. Clearly the man was not having a good day. He was pretty famous at this point among those of us who were searching for a generational spokesperson other than Douglas Coupland. My friend and I laughed and felt a little sorry for him. It was pathetic, but endearing. He was so talented. After seeing the tape, I also thought he was lucky, because the crippling self-doubt on display in the interview hadn’t prevented him from doing the work. This was about ten years ago. In more recent footage of Wallace (on YouTube) he’s soft-spoken, but not visibly terrified. Maybe he perfected a sort of persona for the purpose of appearing in public – and he taught creative writing, so his students probably trained him to hold his cards a little closer to his chest. (Some students react to a nervous professor like sharks when there’s blood in the water. I can’t imagine he didn’t notice it right away.) Also, legal pharmaceuticals for calming performance nerves are a lot easier to get today. In short, he probably gave that one inconsequential public-access interview on a particularly bad day. He probably had them every now and then, until he had a really bad day last Friday.

Today, a lot of publications ran online eulogies. They focused on the brilliance of his work and his impact on literature, but refrained from speculating on the cause of death. When I look at news on the Internet I almost always read the comments, and sometimes, if it’s a piece on a topic I already know a lot about, the comments are all I read. The New York Times forums are my favorites. There are a few trolls here and there, but readers generally don’t overreact, and the editors step in if things turn ugly. The Times comments section after Michiko Kakutani’s discussion of Wallace's work shows a particular style of group grieving – people who knew him personally or through his work shared memories, and a few weighed in on depression and suicide (a sin or a sickness? Discuss.) Some posts cited mentions of depression in Wallace’s writing – for example, the bit about the urge to jump overboard in his Harper’s essay on cruise lines, or his essay on depressed people. Sarah Palin was mentioned several times. I’m not a genius, but I am a person who values thoughtfulness, and I have to agree with the readers who drew parallels between what she symbolizes and how people like us (here, I’m not talking about Wallace – I’m talking about snooty Eastern urban types – folks who read the New York Times arts section online) feel when she mocks “big fat resumes” and her supporters consider book-learnin’ a political liability. In the readers’ forum, one reader speculated on Wallace’s thoughts about “this country's response to Sarah Palin, the blatant reaffirmation of the strident anti-intellectualism that put us on the downward slope with our foreign relations, our economy and our future as a place where it can even be possible to be a reflective human being and be appreciated as such.” Another reader replied, “shame on you for the Palin jokes. Think of how his family feels.”

It’s not a joke. It might be a partisan attempt to hijack sorrow in order to promote a political goal, which is tasteless (and a well-worn political strategy), but it’s also a statement of fact. When that feeling of worthlessness descends, reading today’s political coverage doesn’t help. Is the candidacy of Sarah Palin sufficient cause to do myself in? Of course not. Is her ascendance depressing to people who think there’s more to foreign relations than guns and bluster? Well, yes. When I visited my nephew’s grammar school I was struck by the “anti-bully” propaganda plastered all over the school. It shows up in my own child’s urban school materials too, but not half as much as it does in the Republican stronghold my nephews live in. Yet the swaggering, hard-nosed bully is what the Palin / McCain ticket glorifies. (This is the sort of paradox that Harper’s-type writers love to point up, and if I were half as talented as David Foster Wallace, I would have written an entire essay on that topic by now.) The Third Reich rose on a wave of shared resentment over the humiliations of Versailles and the economic traumas of the early thirties. People supported Hitler because he defended them against those who would denigrate German pride. They didn’t necessarily recognize him as a bully – he was their champion. The grievances expressed by supporters of Palin are not fake – they're as real as the hurt felt by middle-class Germans when their savings became worthless and Goebbels told them to blame the infiltration of Jews, communists, and intellectuals. Looking at Palin through that lens, it feels dangerous and outmoded to be a thinking person at this moment. Even if Wallace had no thoughts at all on the current situation as he took steps toward his own death, those of us who are trying to make sense of what causes such a smart and successful person to kill himself can’t help but consider it. On the other hand, maybe we just think too much.

Monday, August 11, 2008

STOP TELLING ME WHAT EVERYBODY'S EATING: A RANT ABOUT ALMOST NOTHING.

There is a limit to what we can, within reason, be expected to endure, and sometimes even the smallest thing can tick me off. At the risk of demonstrating myself to be a short-tempered coot, I demand that the New York Times writers stop talking about what they and their sources are eating.

What do I mean? Good question. My response: Frequently, in 'features' sections of the NYT, like the New York Times Magazine, the stories involve longer interviews with sources who play major roles in them. As if to demonstrate that these are real people, doing real things, with whom the journalist ACTUALLY interacted, and also so as to fill space (part of what Kevin Barnhurst has called "The New Long Journalism"), we find journalists sharing all kinds of information concerning what their sources are eating, and where.

Take the recent article about internet trolls, from The New York Times Magazine, August 3, where we are told:
"We ate muffins at Terra Bite, a coffee shop founded by a Google employee where customers pay whatever price they feel like."
"We walked on, to Starbucks. At the next table, middle-schoolers with punk-rock haircuts feasted noisily on energy drinks and whipped cream. Fortuny sipped a white-chocolate mocha."
"Fortuny calls himself “a normal person who does insane things on the Internet,” and the scene at dinner later on the first day we spent together was exceedingly normal, with Fortuny, his roommate Charles and his longtime friend Zach trading stories at a sushi restaurant nearby over sake and happy-hour gyoza."

This is an informative story about internet trolls, and I write not to discredit the journalist (Mattathias Schwartz), but to say two things:

a) The fear was, for a long time, that stories in news outlets would get shorter and shorter, as a result of competition with television. The web (though not all other parts of the internet) has given journalism a much more expansive shell, and stories are, in some cases, free to 'breathe', even to lounge around, have brunch, take a stroll through a leafy neighborhood, get lost in the basement of a bookstore, and eventually find their way home. In elite US journalism, we're seeing (and this is only indirectly related to web journalism) longer and longer stories.

b) I think the bit where journalists tell us what they're eating, or what their sources are eating, is irritating, and will be mocked in future times as a telltale sign of the elite journalism of our current period. It's an obvious way to identify one's journalism as made by, and intended for, the upper middle class (perhaps as 'objectivity' was a hundred years ago). And it's too clever by half.

Before I become Andy Rooney, I'm going to take my leave...

Tuesday, August 5, 2008

Don't Hate Him Because He's Intelligent

There are all sorts of reasons that I could give as to why I think John Derbyshire has surpassed Gore Vidal and Ron Rosenbaum as the biggest asshole now working in American journalism, but I would suggest that this recent piece of his in the National Review serve as Exhibit A. At the top of the article, he cites a recent essay by William Deresiewicz in which Deresiewicz recounts having had some trouble talking to his plumber. This inability to communicate across class lines becomes, for Derbyshire, evidence for the idea of a natural elite, based mostly, he seems to think, on intelligence. So you see, when the hoi polloi start moaning about elitism, it's really just a complaint that some folks are smarter than them.

First of all, any time you start talking about "inherited" intelligence you beg all kinds of pretty basic questions, including: what intelligence really is, how we know we have measured it, how we could ever control for various environmental influences with enough confidence to make any sort of claim about innate abilities, and whether it is the case that intellectual skills are represented by a single trait, or (more likely by far) that some people have mental capabilities that make them good at dealing with some sorts of tasks, and others have capabilities to deal with other sorts of tasks.

But the really rich irony here is that Derbyshire has completely misunderstood (or mis-represented) the essay that he quotes from. William Deresiewicz isn't arguing that he can't talk to the plumber because he, Deresiewicz, is just so damned smart. He's arguing that he can't talk to him because he is not competent to do so. There's no natural hierarchy at play here; there's a failure of the educational system. Which any reasonably intelligent reader might have guessed from the title of the essay: "The Disadvantages of an Elite Education."

One of the ways that Ivy League schools deform their students, Deresiewicz argues, is by pampering them both intellectually and emotionally, so that they get an undeserved sense of superiority:

"There are due dates and attendance requirements at places like Yale, but no one takes them very seriously. Extensions are available for the asking; threats to deduct credit for missed classes are rarely, if ever, carried out. In other words, students at places like Yale get an endless string of second chances...Elite schools nurture excellence, but they also nurture what a former Yale graduate student I know calls 'entitled mediocrity.'"
In other words, John Derbyshire, unafraid to speak the Truth that all us cowed liberals cannot accept--that there are truly superior people in the world, who are just more intellectually gifted than all the rest--isn't even bright enough to figure out the point of the stuff that he quotes. He has taken from Deresiewicz's essay almost exactly the opposite of the meaning the author obviously intended. And then he's broadcast that perverted interpretation to his readers, most of whom probably won't bother to read the original. So now, an insightful, provocative commentary on the relationship between education and class in modern America gets turned into some lame defense of elitism, thanks to Derbyshire. What a tool.

This isn't really media commentary, but I had to get it off my chest.

Wednesday, July 30, 2008

Media and bodies

The issue of visibility is, as Dave notes, important for thinking about new media in general and especially thinking about it historically. Bloggers have helped make visible certain moments or kinds of information (two prominent examples: Presidential sexual follies; racist remarks made by public figures) that may not have become part of the public discussion in an earlier era. In doing so, they have also made visible the ways in which mainstream media had always decided on what was or was not newsworthy, allowing for a more public critique of news institutions as well as politicians.

At the same time (as Fernando points out) we need to realize that media of any kind both open up and foreclose certain opportunities, encourage certain ways of acting and discourage others, bring some kinds of information to the fore and hide other kinds. One of the things that the Internet hides is the physical specificity of the bodies that use it: their visibility. Sherry Turkle has famously celebrated that aspect of Internet communication. Because physical presence was removed from the interaction, people could be whoever they wanted to be. If you were a middle-aged male accountant from Wichita, you could pretend to be a surfer, or a biker, or a Buddhist monk, or a woman, or a space alien. No one would be the wiser: a kind of postmodernist play of identity became a very real possibility. But this feature also meant that it was easy to forget that most of the people using the Internet in the 1990s (the VAST, VAST majority) were white, youngish, middle-class American males (which might help explain why, for example, the dominant political ethos was essentially libertarian). When we look at how the Net was organized, the ways it was used, the kind of discourse that built up around it, we need to keep in mind what sorts of bodies were in charge, and maybe also look at how they used something like a notion of visibility (or related terms like “openness”), to both publicly present themselves and to strategically hide certain elements of their lives.

Sunday, July 27, 2008

HEADFIRST INTO A DILEMMA: HISTORIOGRAPHY, CONTINUED

In a recent post, I argued that too many histories of the media focus too much on the discourse surrounding the media, and neglect the material qualities of the media themselves. I maintain this position, but in the last few weeks, I've been thinking about how some of the best historical work in media studies has succeeded largely because it de-centers the object of analysis. In other words, histories of communication technology that focus almost exclusively on, say, newspaper reports about a certain medium (many inspired by Carolyn Marvin's When Old Technologies Were New, a truly superior work) fail to deliver much because of their tendency to recapitulate tired ideas about utopian and dystopian expectations (Marvin, and Carey before her, always did better than this). But histories that focus almost exclusively on the technology as a thing in and of itself reify the 'set-aside-ness' of the communication technology in question, and this leaves us with some pretty weak historical work, too.

A colleague recently suggested to me that Daniel Walker Howe's What Hath God Wrought--a history of the U.S. from 1815 to 1848--did a better job with the history of the telegraph than many other books that took a more telegraph-centric look at things. The point: Howe (a bona fide historian) does better talking about the telegraph because he's not talking about the telegraph itself, or even about the discursive domain surrounding the telegraph. It's a superior piece of media history because it's not trying to be the history of a medium, which allows it to be a multivariate and broad exploration of all kinds of things happening in the U.S. as the electrical telegraph was coming into being (keeping in mind that the original 'telegraph' wasn't electrical, or American). So, all those issues in economics, culture, military history, race, gender, class, politics, power, regionalism, and literature get pulled into the analysis.

So, maybe it works like this:
--histories about technologies themselves are often too limited in scope
--histories about the discourse around the technologies have become a kind of one-note reminder of the constructedness of all things (an idea that is important, but not really sufficient in all cases to explore or exhaust a particular area of study)
--histories that go broad and don't focus on a communication technology are the best

To write good histories of the media, maybe we need to stop looking at the media.

Tuesday, July 22, 2008

The Internet: Technological Revolution or Business as Usual?

The other day while I was waging a pitched battle with my five-year-old son over something that seemed crucial at the time but is totally unmemorable now, my eight-year-old daughter came careening up with a copy of Entertainment Weekly, the issue with the “Comic-Con ’08 Preview.” She held up the page with the panels from Ender’s Game and asked in a tremulous voice, “What is happening?” Since I’m a certified media studies person sensitive to issues of kids and media, of course I turned on a dime, forgot about my fight with Wyatt, and settled in on the sofa for a nice snuggly media literacy session. Right? Wrong. In my frantic state, I flung my hands in the air (like I just don’t care) and said “I don’t know! I can’t tell you! I just don’t know!” And then, I barked, “And that’s a grown-up magazine anyway, you shouldn’t be reading it!” End scene.

The next day she and I were in the Video Library rental place. I was trying to find a copy of Eastern Promises and she was lurking around in the comics section. I’m not that up on recent graphic novels but I’ve seen a few in the past that freaked me out. So I kept throwing glances over there and worrying that she would come across something horrible, but at the same time I was feeling ashamed of my censorship-cop behavior and wanted to give her some freedom. She’s a sensitive person – she came home from Wall-E in tears – so part of why I reacted so inappropriately to the Ender’s Game question came from frustration at the futility of trying to protect her from weirdness. But at the same time she heads for the comics like a moth to the flame and has to be pried loose, so why should I hold her back from her appreciation of art? This was my internal struggle, but luckily she stayed near the kids’ shelf and all was well.

On our way home, I tried to do the right thing and address what happened with the Entertainment Weekly magazine. I told her I was sorry I was so abrupt and dismissive, and then I tried to explain what was happening in the panels – basically a boy was having a microchip removed from the back of his neck, and part of the whole alien mythology (myths are stories we tell that express our hopes and fears) is that people get abducted by aliens and get chips in their necks, so the aliens could monitor us, kind of like what we do to migratory animals, and this story looked like a play on that, and blah blah blah … the usual kind of over-explanation that makes her glaze over … finally she interrupts me and asks, but why does he seem so CALM about it?

I guess, because of Novocaine. What’s novocaine? It’s a drug doctors use to numb your skin so surgery doesn’t hurt. Oh. And that was the end of the conversation. Apparently she wasn’t too worried about the alien chips aspect of the thing.

So what’s my point in posting this evidence of my poor parenting skills? Just this: I think another reason why I tack back and forth between “The impact of the Internet is immeasurable!” and “Eh, there’s nothing new under the sun” is that, as far as the mundane details of my life are concerned, hardly anything troublesome comes of the Internet. It’s incredibly easy to monitor. We rigged it so she can’t go anywhere we haven’t already vetted. On the other hand, just about everything I have trouble handling, in terms of assessing its “effects” on my child, comes from print and television, and it’s usually stuff that I brought into the house! I have trouble handling the idea that my darling sunshiny innocent daughter will encounter a world of perversion and darkness. Putting her on the PBSKids website is probably one of the “safest” things I can do with her – it’s a better electronic babysitter than the tube ever was. It’s like a padded-wall playpen in the middle of a madhouse, but it's probably too young for her now and I have to let her grow up. This is a question of boundaries and barriers, and of what audience I’m locating my kids in. Also, regarding Fernando's observations about Radway and Callejo: I'm not "using" media properly, if that means taking advantage of its complexities and using it to the utmost (and taking advantage of the teaching moments it provides). I'm far more wary of "real" than virtual space when it comes to the absolutely most important things in my life.

Monday, July 21, 2008

All Over the Place

I have remained blog-silent for so long (actually, forever) that now I feel I’d like to say a lot, and I am afraid I am going to be “all over the place.” I’ll try to get to the point, and I will restrict my comments to the recent exchange between Mark and Dave (below)...

When I first read Mark’s post on the democratic potential of the Internet, it made me think not so much about whether or not the Internet is a revolutionary democratic medium, but about the frequency and use of this type of discussion (perhaps Dave would call this a meta-level thought). No doubt, that thought was prompted by the fact that I was also reading at that time two texts that I use in class. One was the concluding chapter to Janice Radway’s “Reading the Romance.” The other was a portion of Javier Callejo’s “La Audiencia Activa” (probably the best piece of audience research ever produced in Spain).

Radway devotes a significant part of her conclusions to discussing whether or not reading romance novels has any practical utility for improving the social and family situation of women. Callejo, in turn, discusses how participants in the focus groups he conducted often accused other members of the family (usually those in less “powerful” positions) of being addicted to television (constantly watching useless programs, not doing anything worthwhile with their time, etc). Radway is speaking from what seems by all accounts a genuine concern for the well-being of women, while Callejo’s subjects seem to be using television to play power games within the family. However, I feel that there is a line of continuity between Radway’s discussion and Callejo’s subjects’ comments. They are all talking about how we do not use the media properly or how we do not extract all the potential of those media to change our personal, family or social situation. They are all judging “others” in terms of their media use. I think that the issue of the democratic potential of the Internet (note that “democratic” is equivalent to “good”) belongs to the same kind of discourse. And I have to confess that I feel uncomfortable with it. The same way I don’t like it when people tell me some media is bad for me, I don’t like it when people say “it’s great, but people don’t use it properly or enough”.

Technology opens up possibilities (or closes them off). And the fact that those possibilities exist is precisely what leads to the discussions and debates mentioned above. If you only have a land line and you do not answer a call, the caller will probably assume that you are out. But if you have a cell phone and you don’t answer a call, the caller will probably feel you are not a good cell user (after all, cells only exist so you can answer calls at any time from anywhere). It is the possibilities that the technology opens up that allow others to criticize, evaluate or ponder your behavior: are you a good reader, a good TV viewer, a good Wikipedia contributor, a good citizen? (I have to confess that ever since this blog was started I have felt the need to live up to the possibilities it opened to me, and I have been somewhat anxious about not living up to those possibilities… I am starting to feel more relaxed now).

And, when the technology increases our possibilities to do things, can we say that something new is happening, or is it just the same old? Well, I think the answer is rather arbitrary. If we define and name a certain animal in a certain way, and then we find a specimen that matches the description in every respect but one, we have two options: make our definition more complex to account for the observed variation, or use a new name to refer to this specimen, which is only slightly different. In my opinion, the name is not that relevant. What really matters is that we carefully study these animals and their behavior.

And this leads me to Dave’s latest post (which, by the way, reminded me of Dominique Wolton’s “Eloge du Grand Public”). I think what Dave is describing is a more complex world. While the audience is fragmenting, there are also certain events (contents, products?) able to attract unprecedentedly large audiences. Both things (fragmentation and agglutination) are taking place at the same time. And this is so because the technology is opening up possibilities and some people (not all, there are still too many lazy bums!) are taking advantage of some (not all, that would be impossible for anyone) possibilities. And that is enough to make our world much more complex and blur previously clear distinctions.

I don’t know if I made sense, but I feel better now :)

Wednesday, July 16, 2008

Obligatory New Yorker post

Feels like it’s already old news, with the backlash to the backlash going full strength, but here’s my two cents’ worth anyhow.

The unspoken heart of the debate is that lots of people are simply too stupid to read cartoons. Specifically, New Yorker cartoons. Which are, let’s face it, often kind of hard to figure out: Seinfeld managed a whole episode on this. You need to do some work on them. You often need a not inconsiderable amount of social capital to figure out what they’re alluding to. This is part of their appeal. At least for me. I kind of like patting myself on the back when I finally get the joke, and if I were completely honest, I’d have to say that I also enjoy the vague feeling of superiority I get knowing that some folks just wouldn’t get it, if they were of a mind to read the damned thing (which many of them are not).

This is relatively mild snobbery, all things considered, made even milder by the sorts of subjects that New Yorker cartoons often address: trophy wives, jokes made at the expense of cultural stereotypes about cowboys and science fiction movies, etc. But this year, attitudes toward the often silly controversies of Presidential-year politics, which tend to be standard fare for satirists, have changed because of the perceived stakes. The response to accusations of humourlessness seems to be: Don’t you understand? There are whole wide swathes of idiocy growing out in the Heartland right now! They don’t believe in evolution! They elected a cretin to fill the position of most powerful human being on the planet! Then they re-elected him!! We just can’t trust them to handle this sort of humor responsibly! Or, as a friend of mine put it to me yesterday on the subject: “What is some farmer in Iowa going to make of this?”

So, a couple of things. First, as the son of a farmer, I am pretty certain that not many of them are going to be looking at the covers of The New Yorker magazine. Second, while I can attest, from personal experience, that farmers believe in lots of stupid things (university professors too, for that matter, which is a topic for another day), it’s not quite clear to me what the mechanism of persuasion is supposed to be here. It’s one thing to believe that Barack Obama is really a Muslim, and was just going to that Christian church in Chicago for 20 years as a kind of front. But if you do believe that, it’s probably because a relative that you trust, or a blogger whose views you like, or an email from a friend, told you so. Not because someone sketched a caricature of him. We all share the same media culture, and we all use the same sorts of modality cues. Like for example: photograph—documentary account; cartoon—fictional account. Nobody, not even farmers, takes a cartoon as veridical evidence of anything. They might not understand it, but they’re not going to be convinced by it one way or another.

The thing is, at this point, I think the level of suspicion on the part of educated and liberal groups in this country—not even of the fundamental decency of their opponents, but of their basic intellectual competence—is now so strong that they seem to be able to imagine that ordinary rules of epistemological judgment no longer apply. And while I often fume about the elitism of the chattering classes, in this case I am a little more sympathetic: not to the particulars, but to the mistrust that helped spark the outcry. I just can’t get over the fact that some people (many of whom will be able to vote in this year's election) actually believe that a wealthy, preppy, Ivy-league educated lawyer is really a radical, American-hating terrorist in disguise: a conclusion that they’ve come to without any help from David Remnick.

Wednesday, July 9, 2008

APOLOGIA FOR MASS COMMUNICATION

There are few more discredited approaches to history than those that rely on nostalgia. Nostalgia is rightly condemned as bad historiography, and often as a kind of psychological malady. I'm very much 'with' this, and I think there are few things more likely to get me ticked off than a book/article/essay concerning the 'decline' of anything. In particular, I hate it when people talk about the decline of public intellectuals. But that's a whole different thing...

The potentially nostalgic idea I want to introduce here concerns mass communication. In particular, I'm interested in how mass communication may have been very good at doing some things, even if it was very bad at doing other things. We are very much accustomed to the 'bad' things about mass communication. Amongst other things, classic mass communication (think network television in the 1970s, or radio in the 1950s) is critiqued for being a leveling factor, a massifier, a social concretizer (so to speak), and a top-down force that serves the interests of the (white, monied, U.S.) elites. Almost everyone knows these arguments. Few ideas about mass communication seem to be better distributed than the idea that it is bad because it is dumbed-down, lowest-common-denominator sludge.

I suggest that the vantage point of the early 21st century gives a good starting place for understanding what mass communication was (or is, or could have been). To warp Innis, it is only in the gathering dusk that Minerva's owl takes flight. Mass communication is not in the dominant position it was in (okay, that's arguable), and that gives us an opening to understand what mass communication was 'good' at.

So, what was mass communication good at? Perhaps not much. But two things seem apparent to me:

1) Mass communication was good at making publics. Joe Turow has dealt with this for the last 10 years. Implicitly or explicitly, Turow has done a good job showing us how new media are easily used to separate out audiences, thus undermining at least one potential thing that everyone could have in common. It used to be that almost everyone watched Lucille Ball on television and listened to Perry Como. Audiences are now fragmented and temporary collections of networks of people, and the lines between them are drawn up by people who are rewarded for splitting up culture in ways that allow for targeted selling. [brief note: I am occasionally struck by how deep of an influence George Gerbner had on Turow. Very telling in this stuff.]

2) Mass communication was also good at making counter-publics. The idea of the counter-public, as I dimly understand it, comes to us as a kind of elaboration on Habermas' notion of the public sphere. The critique of Habermas for a while was that his ideal of the public sphere didn't allow for the different kinds of oppositional publics that do not share the bourgeois settings or identities of the classic public sphere. [note: I don't want to get into this debate here] One weird thing about mass communication is that, because television and radio stations were so centralized, when there was anything new or different out there, this was relatively significant. To speak metaphorically, something like a two-party system ('dominant' and 'oppositional') became possible. To speak in examples: college and community radio mattered a lot more before the internet. Precisely because stations were scarce, these outlets were very effective at organizing audiences, and their position in the system of mass communication lent them a decidedly oppositional cast. At the same time that these oppositional radio stations mattered (roughly the 1980s and early 1990s), oppositional television stations were also culturally important. I'm thinking here of Channel Z in L.A., or New York City's whole community access television scene. There was an audience there, and the programmers of these independent stations were instrumental in pulling this freaky audience together.

Now, with internet-based modes of content distribution, the surfeit of 'alternative' voices means that the whole thing has become more muddled. It's the old saw: if everybody's somebody, then nobody's anybody. There are so many voices out there (and so little of a system for sorting them out), that opposition becomes almost meaningless in the cacophony. We've gone from a two-party system (with one party truly dominant) to a billion-option system that has undermined the coordination of cultural opposition. I doubt that this is a permanent or even terribly bad thing. But looking at this purely in terms of how audiences are coordinated, we see a true sense of disorganization in terms of oppositional culture (a culture that, strangely, benefited from its tenuous position in the heyday of mass communication).

Response to the Response, Part 1: History and New Media

This entry is going to be the first in a series of responses to Dave’s post of last week, which has got me to thinkin’:

First, about the noted tendency for many media historians to dismiss the importance of the Internet, or of new media generally, as just the same-old, same-old. I agree that this is annoying, and I also admit to doing some of it myself. I want to offer several different but compatible explanations for what I think might be going on here.

Most media historians, like most other academics, are geeky, and also very often insecure in their geekiness. Part of their insecurity comes from the awareness that they are experts in a subject in which almost all other people have very little interest, and regard as more or less useless. Hence, in their continual effort to prove their relevance, and also in order to preen before their fellow academics, obscure references to historical personages or events or technologies will inevitably pop up. “You tell me that the Internet is inherently democratic, and yet, don’t you know what Forsythe P. Wigglesworth III, nineteenth-century American abolitionist, adventurer, and inventor of the epilecticoposcope, had to say about the utopian rhetoric surrounding the telegraph?” Followed by a (not terribly germane) quote from Wigglesworth, and a knowing smile.

Then too, the ready dismissal of new media’s significance is a cheap way to claim political sophistication, or a kind of old school radicalism: oh, I am just too, too historically aware to buy into all that hype.

A more justifiable rationale for this sort of argument, at least seven or eight years ago, when propaganda about the Internet was ubiquitous and rarely challenged, was simple weariness about the claims made on its behalf (Darin Barney has a nice quote from John Perry Barlow at the beginning of his book Prometheus Wired, in which Barlow calls digitized information “the most profound technological shift since the capture of fire.”) A reminder about historical perspective often felt apropos. Nowadays there is less need, although some of us (again I would probably include myself here) do tend to slip back into what has become something of a reflex response.

This is offered more in the spirit of explanation than exculpation. The pattern that Dave describes is intellectually lazy and boring, and because of this it essentially helps make the case for the many people who would rather we just forget all about history when talking about modern media technologies.

Saturday, July 5, 2008

Shaun Tan's "The Arrival"


I saw on Crooked Timber that Shaun Tan’s graphic novel, The Arrival, had received a prize from Locus, a science-fiction literature site, which prompted this post.
The Arrival is constructed around the experience of a newly-arrived immigrant (Tan is an Australian of Malaysian-Chinese descent) in a vaguely surreal city: people travel about in balloon-ships, there is an invented alphabet, and the main character is accompanied for most of the narrative by a sort of friendly tadpole-like creature. Tan's world reminded me a bit of the work of the American children’s book artist David Wiesner. Like Wiesner, Tan tells his story without using words.

The conceit—the new world as a variation on Oz—may strike some readers as fey and a little too precious, but it also, to my mind, highlights how certain media can provide us with a distinctive aesthetic experience. At some point in reading the book, I started to think about how a literary novel or even a movie could capture the feeling of strangeness and confusion and wonder and vague foreboding that is the experience of anyone encountering a radically different society. I couldn’t think of how it could be done as well as Tan has managed to do it here.


Monday, June 30, 2008

RESPONDING TO BREWIN'S TWO CLAIMS: [I AGREE WITH THEM]



So, a while back, my friend and co-blogger Mark Brewin took on the forces of evil, which had (presumably right before tying our heroine to the train tracks) made the internet out to be some kind of revolutionary, democratic force for good. I'm going to follow up on his post here, with a coda that presents my real ideas.

Brewin offered two claims, and encouraged us to follow up on them:
CLAIM THE FIRST: "While [the internet] may reduce the importance of some forms of social inequality, it builds upon, perhaps even heightens, the importance of others." He takes as an example the supposedly democratic wikipedia, which in fact is pulled together not by an army of the hoi polloi, but by a relatively small group of people. Prospective wikipedia entries that don't fit the definitions of 'entry-worthiness' maintained by the unseen overlords simply don't make it online. Concludes Brewin: "Wiki as Internet elitism, disguised as Internet egalitarianism."

CLAIM THE SECOND: "At the same time, it would be misguided to ignore the ways in which the new media environment has increased the scope for human creativity, and opened up possibilities for human interaction that most of us couldn’t even have imagined as recently as ten years ago." Brewin mentions nothing in support of this, because he doesn't have to. The evidence is all over the place.

Right, Mark. Good. But where do we go when we pull these things together? There are lots of opportunities. Here is one idea and a meta-level observation:

ONE IDEA: The idea of visibility seems to be an important dimension in much of this. Think about it this way: wikipedia is an example of an online application that allows for new modes of visibility/revelation/publicness (in that it is relatively open to a relatively large number of people who can post and edit entries), while the system by which this visibility is policed is itself not very visible. In this sense, there is an ambivalence of visibility about the whole thing, in that the act of concealment/occlusion is part of the revelation. Foucault made a big deal about how the 'eye of power' worked, and how individuals (and more broadly, the human sciences and individualism itself) were constituted in part by being made available to power through constant monitoring.

But here we see the reverse (and NOT the opposite) of this: we see how visibility is something that is used (and how concealment of visibility is used) by online interactants. That there is something being concealed at all times is not a difficult suspicion to maintain; it is an obvious Kenneth Burkean starting point. The question I pose is this: how persistent is this blend of visibility and concealment? Is this what it's going to be like for a while? I doubt it...

From a different angle, I think we see yet another instance of how we are, in Alvin Gouldner's terms, moving from a society organized by "the command" to a society organized around "the report." The wikipedia example gives us a snapshot of a societal arrangement in which facts (here in the form of a compendium obviously based on an encyclopedia model) matter, while reflection about how those facts attained their fact-y status is less easy to come by. And, as umpteen journalism scholars have pointed out: fact-based reporting is VERY difficult to accomplish with a large number of equal participants. If objective reporting survives as an ideal in newsrooms, it is at least partly because 'facts' lend themselves to the kind of hierarchical arrangements that newsrooms have created. Wikipedia and other less-than-directly-democratic online ventures show us that facts go well with hierarchy. If you want democracy, you had better be prepared for something much messier than facts.

META-LEVEL OBSERVATION: I have grown weary of attempts to understand these issues simply by identifying utopian and dystopian claims surrounding new media. Historians of the media often do this, and it often looks like this:
Premise 1: Old media were often described in utopian and dystopian terms. [examples provided]
Premise 2: This new media phenomenon is also described in utopian and dystopian terms. [examples provided]
Conclusion [with my own sarcastic use of un-grammar]: The computers is nothin' new.

This approach to history and new media is easy to find, and there are some serious problems with it. First: This approach is usually invoked in an appeal to get 'past' technological determinism (a la McLuhan). But in almost all cases, this technological determinism is chucked overboard in favor of a cultural determinism. And it's often a flavor of cultural determinism that makes it seem as if nothing ever changes. Speaking as someone who scoffs loudly and rudely at claims of novelty: there is new stuff, and the material (the technology, the economy, and other stuff) matters.

Second: This approach is often motivated by a historiographical assumption that usually goes unstated (indeed, some practitioners may be surprised to find it called an approach at all). It goes like this: I'm going to tell you about old and new media, but in order to do so I will ignore the media themselves and just tell you what a couple of elite newspapers have to say about those media. Those who have done media history know why researchers make this choice: newspapers are a heck of a lot easier to find (and understand) than other historical data. But I think it's time to get past the purely discursive, cultural, continuity-favoring approach to technology. Easier said than done, of course.

Wednesday, April 30, 2008

Message-Force Multipliers and Panic at the Pump

Last Sunday’s New York Times published a massive, well-sourced piece of investigative journalism on news networks’ use of Pentagon-briefed “military analysts” to comment favorably on the war in Iraq. These retired high-ranking officers were called “message force multipliers” and “surrogates,” spinning war news to keep the intervention justified and the outlook positive. Though they looked like neutral experts, some of them also worked for defense contractors as lobbyists or consultants, and the broadcasters failed to investigate or disclose these conflicts of interest. More damning, the Pentagon apparently used public funds to propagandize the American people, which, believe it or not, is still illegal. Apparently the chairman of the Senate Committee on Armed Services is requesting an investigation. None of this is much of a surprise. It’s that bad old military-industrial complex, with the “corporate media” just an industry in league with the folks getting rich selling big guns, exactly what Eisenhower warned us about in 1961.

That said, the military-industrial complex was only one of the two threats to American democracy that Eisenhower named in his farewell address to the nation, and I almost never see reference to the second: the government’s relationship to research:

“Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers. The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present – and is gravely to be regarded.”

It’s easy to point to the fulfillment of Eisenhower’s forecast of the power of the military-industrial complex, but this second “threat” at first seems harder to nail down (except by those of us who smugly congratulate ourselves for eschewing “administrative” research). It’s hard to grasp how the general public was meant to understand it then, and how we can understand it now. This view of “free ideas” – as opposed to ideas that cost a lot and require validation with a patent or other “deliverable” – seems romantic but obsolete. However, when paired with the American dependence on oil, and the lengths to which we go to “protect our interests” in the Middle East, Eisenhower’s skeptical glance at “the power of money” in allocating research funds makes a lot of sense. The scholars he imagines are “hard” scientists – those who might have figured out some energy alternatives by now, had public money been aggressively allocated to this sort of research instead of war (and internally-targeted psy-ops programs put in place to justify war). As critical media scholars, we have some interest in examining the demise of the notion of “free ideas,” and wondering whether the education industry has long been covered by Eisenhower’s first warning, leaving the second warning elegantly phrased and interesting to read, but ultimately redundant.

Tuesday, April 1, 2008

Jesus Christ, Superman

Given my religious allegiances, and my amateur interest in the possibilities of sequential art (not to be confused with the actual expertise of our own pravdakid, on which see below), it was probably inevitable that I would end up directing readers to this story. And for those of you who aren’t into the whole “Christian” thing: the very notion of a British artist of African heritage using a Japanese variation on a medium generally associated with twentieth-century American popular culture, in order to update a several-thousand-year-old text from the Middle East, should be interesting simply as an example of the emerging global ecumene, if nothing else.

Friday, March 21, 2008

Enough with the talk of revolution, already

As an Internet skeptic, I find it always does my heart good to read pieces like Chris Wilson’s recent Slate article on Wikipedia and Digg and the much-hyped Web 2.0, a supposedly more democratic version of the Web (which, if you remember, was itself supposed to be a more democratic version of the older media forms it was going to replace). Wilson points out that about one percent of Wikipedia users account for over half of its edits. As for Digg, it turns out that many of its top stories—44 percent in 2007, 56 percent in 2006—were submitted by only 100 people. When the site tried to fiddle with its algorithm in order to reduce the influence of these top contributors, they threatened a boycott. “Despite the fairy tales about the participatory culture of Web 2.0,” writes Wilson, “direct democracy isn’t feasible at the scale on which these sites operate.”

If I had blogged this entry when Wilson’s piece first appeared (which is what I meant to do), I would have ended it about here, with some snide comment to the effect of “how are the cyber-utopians going to spin away Wilson’s claims?” Then I came across a link to another piece, by Nicholson Baker, on the manner in which Wikipedia deletes subjects it deems unimportant, and his efforts to protect threatened entries: “I found press citations and argued for keeping the Jitterbug telephone, a large-keyed cell phone with a soft earpiece for elder callers; and Vladimir Narbut, a minor Russian Acmeist poet whose second book, Halleluia, was confiscated by the police.” Although I’m not always Mr. Baker’s biggest fan, his protests against the administrators’ dicta seemed to fit into the general theme I was working on—Wiki as Internet elitism, disguised as Internet egalitarianism. So I googled the article, hoping to find more fodder for my argument. Turns out Baker is not anti-Wiki. The piece begins, in fact, as follows: “Wikipedia is just an incredible thing.” Reviewing a manual by John Broughton on how to make the most of one’s Wikipedia experience, Baker celebrates the democratic feel of the site. That is, democratic in a Bakhtinian, medieval-carnivalesque sort of way: Wikipedia is, we are told, “fact-encirclingly huge, and it’s idiosyncratic, careful, messy, funny, shocking, and full of simmering controversies.” Baker’s complaints, such as they are (and even he admits that some entries deserve deletion), are made with an eye to getting more participation, so that other folks will join him in his defense of the obscure and the trivial.

(Your point being, perfessor?)

(Uh, not much, I guess. Maybe just this:)

The Internet is not inherently democratic: not in its 2.0 version or any version that is likely to appear in our lifetimes. While it may reduce the importance of some forms of social inequality, it builds upon, perhaps even heightens, the importance of others. At the same time, it would be misguided to ignore the ways in which the new media environment has increased the scope for human creativity, and opened up possibilities for human interaction that most of us couldn’t even have imagined as recently as ten years ago. It would be nice to see more media scholars working at the intersection of those two claims.

Tuesday, March 18, 2008

Michel Gondry's Vision of Media Democracy


Michel Gondry's most recent film, Be Kind Rewind, may be one of the most poorly reviewed movies of the last few months. Critics have generally adhered to the notion that Gondry works best with someone else handling the screenwriting (often citing Eternal Sunshine of the Spotless Mind as a successful collaboration between Gondry and Charlie Kaufman). Though I'm now writing my second consecutive post about 'how the critics are wrong,' I do beg to differ.

Be Kind Rewind takes up ideas that are of direct concern to almost any theorist of the media. The movie takes the form of a melodrama (complete with a Mr. Pinchpenny character and a threatened town landmark) wherein the crew at a video store that stocks only VHS tapes begins to make its own versions of the movies it rents out, putting it in conflict with regulatory agencies and big media. (Far better plot synopses are available; go find one if you want.) The theme of the movie--perhaps stated in an artlessly bold manner--is that we should like it when members of a culture (alone and, especially, in locally-based groups) make their own 'stuff'. Who can make a better version of Ghostbusters? Some jamochs from New Jersey. Who can tell the life story of Fats Waller? The people. It's a blatantly populist film (another weakness, perhaps), and it resonates with much of the John Dewey-derived theory and research concerning organic community-building through the media.

As someone who generally bristles at populist appeals (I'm even a defender of professionalism), I find the movie compelling not because of its pitting of big corporations against 'the people', but because of its less blatant celebration of making movies. Gondry once made a short film about himself called (if memory serves) "I'm Always 12". Right. The movie is largely about getting in touch with the kind of enthusiasm my friends and I had when we made 8 mm movies (most of them Raiders of the Lost Ark rip-offs) in my suburban neighborhood in the early 1980s. This is, perhaps, a weakness of the movie (after all, not everyone has access to media-making tools). But, taken as a utopian vision--which is, I think, appropriate--Be Kind Rewind pictures a world where everyone is working in front of and behind the camera. And I must admit, I'm with the Deweyans on this one. Bravo, Gondry! At the very least (and I mean the VERY least), you've made a case for community-based media that actually celebrates the potential, without getting bogged down in the dilemmas we have faced so far.

There's much more to be said here, about Be Kind Rewind's potential relevance to the whole "YouTube" phenomenon, but I will let that wait for a different day.

Monday, March 17, 2008

The Maus Effect: Comic Book Criticism and The Middle-Brow


It's a well-worn story by now. The comic book started off as a truly maligned product of the Great Depression. After World War II, comic books in the U.S. explored a number of different genres. One of the most popular was the crime comic book. With their abundant violence and apparent celebration of a criminal lifestyle, crime comics became a kind of stand-in for the whole medium, and an increasingly well-organized attack on comic books eventually (with a number of ironic reversals along the way) culminated in the Comic Book Code of 1954. The Comic Book Code was a truly demanding self-censorship code that (along with the introduction of television) relegated comic books to the lowbrow. Flash forward thirty years, and we find the comic book reborn in the 1980s as the graphic novel.

The graphic novel was (and remains) the new, more adult comic book. Graphic novels wore their cultural aspirations on their mylar sleeves, with fancy art and 'adult' themes. There is probably no better example of what the 'graphic novel' was about than Art Spiegelman's MAUS. Released in the late 1980s, MAUS enjoyed glorious praise for addressing the Holocaust through the form of a comic book. It was widely reviewed in magazines and newspapers throughout the U.S. (and would go on to win a Pulitzer Prize). The classic review went something like this: "We all know that comic books used to be about superheroes and silliness. This new graphic novel is about the Holocaust, and shows us that comic books can be better than we ever thought. MAUS has, effectively, breathed new life into the medium of comix."

For the critics, MAUS became nothing less than the redemption of a medium (mind you: a medium that few of those critics had bothered to notice before they picked up MAUS). I have nothing bad to say about MAUS, but it is difficult for any single work (graphic novel or whatever) to prop up an entire medium, and this reaction to MAUS has become a broader theme in comic book criticism.

So, when newspapers and magazines (I'm thinking in particular of the NY Times book review) feature reviews of comic books, their reviews often favor those works that seem least like what comic books supposedly used to be. It gets frustrating, seeing the same old "comic books aren't just for kids anymore" observation 20 years after the 'MAUS moment'.

The recent graphic novel that most clearly benefited from the fawning MAUS effect on comic book criticism was Persepolis, by Marjane Satrapi. For those who do not know, Persepolis is an autobiographical tale about the author's growing up in Iran around the time of the revolution, moving to Austria during the Iran-Iraq war, and returning to Iran afterward. It has recently been turned into a (more or less well-reviewed) feature film.

Something bugs me about this, but before I get to what bugs me, allow me to set the record straight: I think MAUS is great. I also very much enjoyed Persepolis, in both graphic novel and movie form. And even though I find myself in agreement with the consensus that these are amongst the salvageable (even essential) graphic novels of our time, I cringe when I read the reviews of these works. Because what I see in the commentary on them is a kind of lazy criticism, one that relies on assumptions that don't hold up. Specifically:
a) the assumption that comic books have always been 'bad', and then got saved when critics started noticing them. Why do I disagree? Because it turns about 50 years of comic book art into something that can be comfortably blown off. This, in turn, buys into familiar (and all-too-easily mocked) assumptions about the separateness of high and low culture.
b) the assumption that 'serious' themes make great art. This is classic middle-brow stuff. As if the "very special episode" of tv's "Facts of Life" were somehow better art than the other episodes of "Facts of Life." Or as if Jerry Lewis's almost unseen movie about the Holocaust were better than other Jerry Lewis movies. Giving a pass to culture that deals with 'issues' is not good criticism. Points for good intentions can be saved for other arenas. It only insults comics all the more when critics fawn over 'issues'-related graphic novels.

So: do I HATE Persepolis? No. I thought it was pretty good. But I find it disappointing that commentary about it (as graphic novel and as movie) dealt with the supposed novelty of graphic novels that touch on serious themes, instead of dealing with: the quality of the draftsmanship, the writing of the dialogue, the pacing of the story, the decision to use black & white, the autobiographical mode in comics in general, Satrapi's own voice, and much else.

Talk about getting upset about small things...

Wednesday, February 13, 2008

LOFTY TITLE AND ALL...




In this, the inaugural post for Perpetual Ferment, I would like to outline who we are and what we're trying to do with this blog.

Who we are: Fernando Bermejo, Mark Brewin, Jen Horner, and Dave Park. Those are the names.

What we are trying to do: discuss the media in a blog-worthy way. The emphasis will be on media theory, research, and criticism. The tone (appropriate for a blog, methinks) will be relatively informal. A key word for me, and I believe for my co-authors here as well, will be 'autonomy.' We are not trying to represent any particular point of view in media studies. We are, instead, trying to keep our own ideas dynamic by discussing issues that matter to us, and that provoke disagreement amongst us.

With our titular reference to "Ferment in the Field," we position ourselves (pretentiously, perhaps) as folks who are using this blog as a way to think past some of the central positions oft advanced in media studies. We hope to get beyond some of the internecine squabbles that have defined the field of media studies: qualitative vs. quantitative, critical vs. administrative, political economy vs. cultural studies, and so on.

If this sounds like a scattershot and amateurish idea for a media studies blog, so be it. The study of the media could use some more scattershot and amateurish ideas.

Excelsior.