Wednesday, July 16, 2008

Obligatory New Yorker post

Feels like it’s already old news, with the backlash to the backlash going full strength, but here’s my two cents’ worth anyhow.

The unspoken heart of the debate is that lots of people are simply too stupid to read cartoons. Specifically, New Yorker cartoons. Which are, let’s face it, often kind of hard to figure out: Seinfeld managed a whole episode on this. You need to do some work on them. You often need a not inconsiderable amount of social capital to figure out what they’re alluding to. This is part of their appeal. At least for me. I kind of like patting myself on the back when I finally get the joke, and if I were completely honest, I’d have to say that I also enjoy the vague feeling of superiority I get knowing that some folks just wouldn’t get it, if they were of a mind to read the damned thing (which many of them are not).

This is relatively mild snobbery, all things considered, made even milder by the sorts of subjects that New Yorker cartoons often address: trophy wives, jokes made at the expense of cultural stereotypes about cowboys and science fiction movies, etc. But this year, attitudes toward the often silly controversies of Presidential-year politics, which tend to be standard fare for satirists, have changed because of the perceived stakes. The response to accusations of humorlessness seems to be: Don’t you understand? There are whole wide swathes of idiocy growing out in the Heartland right now! They don’t believe in evolution! They elected a cretin to fill the position of most powerful human being on the planet! Then they re-elected him!! We just can’t trust them to handle this sort of humor responsibly! Or, as a friend of mine put it to me yesterday on the subject: “What is some farmer in Iowa going to make of this?”

So, a couple of things. First, as the son of a farmer, I am pretty certain that not many of them are going to be looking at the covers of The New Yorker magazine. Second, while I can attest, from personal experience, that farmers believe in lots of stupid things (university professors too, for that matter, which is a topic for another day), it’s not quite clear to me what the mechanism of persuasion is supposed to be here. It’s one thing to believe that Barack Obama is really a Muslim, and was just going to that Christian church in Chicago for 20 years as a kind of front. But if you do believe that, it’s probably because a relative that you trust, or a blogger whose views you like, or an email from a friend, told you so. Not because someone sketched a caricature of him. We all share the same media culture, and we all use the same sorts of modality cues. Like for example: photograph—documentary account; cartoon—fictional account. Nobody, not even farmers, takes a cartoon as veridical evidence of anything. They might not understand it, but they’re not going to be convinced by it one way or another.

The thing is, at this point, I think the level of suspicion on the part of educated and liberal groups in this country—not even of the fundamental decency of their opponents, but of their basic intellectual competence—is now so strong that they seem to be able to imagine that ordinary rules of epistemological judgment no longer apply. And while I often fume about the elitism of the chattering classes, in this case I am a little more sympathetic: not to the particulars, but to the mistrust that helped spark the outcry. I just can’t get over the fact that some people (many of whom will be able to vote in this year's election) actually believe that a wealthy, preppy, Ivy League-educated lawyer is really a radical, American-hating terrorist in disguise: a conclusion that they’ve come to without any help from David Remnick.

Wednesday, July 9, 2008

APOLOGIA FOR MASS COMMUNICATION

There are few more discredited approaches to history than those that rely on nostalgia. Nostalgia is rightly condemned as bad historiography, and often as a kind of psychological malady. I'm very much 'with' this, and I think there are few things more likely to get me ticked off than a book/article/essay concerning the 'decline' of anything. In particular, I hate it when people talk about the decline of public intellectuals. But that's a whole different thing...

The potentially nostalgic idea I want to introduce here concerns mass communication. In particular, I'm interested in how mass communication may have been very good at doing some things, even if it was very bad at doing other things. We are very much accustomed to the 'bad' things about mass communication. Amongst other things, classic mass communication (think network television in the 1970s, or radio in the 1950s) is critiqued for being a leveling factor, a massifier, a social concretizer (so to speak), and as a top-down force that serves the interests of the (white, monied, U.S.) elites. Almost everyone knows these arguments. Few ideas about mass communication seem to be better distributed than the idea that it is bad because it is dumbed-down, lowest-common-denominator sludge.

I suggest that the vantage point of the early 21st century gives a good starting place for understanding what mass communication was (or is, or could have been). To warp Innis, it is only in the gathering dusk that Minerva's owl takes flight. Mass communication is not in the dominant position it was in (okay, that's arguable), and that gives us an opening to understand what mass communication was 'good' at.

So, what was mass communication good at? Perhaps not much. But two things seem apparent to me:

1) Mass communication was good at making publics. Joe Turow has dealt with this for the last 10 years. Implicitly or explicitly, Turow has done a good job showing us how new media are easily used to separate out audiences, thus undermining at least one potential thing that everyone could have in common. It used to be that almost everyone watched Lucille Ball on television and listened to Perry Como. Audiences are now fragmented and temporary collections of networks of people, and the lines between them are drawn up by people who are rewarded for splitting up culture in ways that allow for targeted selling. [brief note: I am occasionally struck by how deep of an influence George Gerbner had on Turow. Very telling in this stuff.]

2) Mass communication was also good at making counter-publics. The idea of the counter-public, as I dimly understand it, comes to us as a kind of elaboration on Habermas' notion of the public sphere. The critique of Habermas for a while was that his ideal of the public sphere didn't allow for the different kinds of oppositional publics that do not share the bourgeois settings or identities of the classic public sphere. [note: I don't want to get into this debate here] One weird thing about mass communication is that, because television and radio stations were so centralized, when there was anything new or different out there, this was relatively significant. To speak metaphorically, something like a two-party system ('dominant' and 'oppositional') became possible. To speak in examples: college and community radio mattered a lot more before the internet. Precisely because of the scarcity of stations, these radio stations were very effective at organizing audiences, and their position in the system of mass communication lent them a decidedly oppositional cast. At the same time that these oppositional radio stations mattered (roughly the 1980s and early 1990s), oppositional television stations were also culturally important. I'm thinking here of Channel Z in L.A., or New York City's whole community access television scene. There was an audience there, and the programmers of these independent stations were instrumental in pulling this freaky audience together.

Now, with internet-based modes of content distribution, the surfeit of 'alternative' voices means that the whole thing has become more muddled. It's the old saw: if everybody's somebody, then nobody's anybody. There are so many voices out there (and so little of a system for sorting them out), that opposition becomes almost meaningless in the cacophony. We've gone from a two-party system (with one party truly dominant) to a billion-option system that has undermined the coordination of cultural opposition. I doubt that this is a permanent or even terribly bad thing. But looking at this purely in terms of how audiences are coordinated, we see a true sense of disorganization in terms of oppositional culture (a culture that, strangely, benefited from its tenuous position in the heyday of mass communication).

Response to the Response, Part 1: History and New Media

This entry is going to be the first in a series of responses to Dave’s post of last week, which has got me to thinkin’:

First, about the noted tendency for many media historians to dismiss the importance of the Internet, or of new media generally, as just the same-old, same-old. I agree that this is annoying, and I also admit to doing some of it myself. I want to offer several different but compatible explanations for what I think might be going on here.

Most media historians, like most other academics, are geeky, and also very often insecure in their geekiness. Part of their insecurity comes from the awareness that they are experts in a subject in which almost all other people have very little interest, and regard as more or less useless. Hence, in their continual effort to prove their relevance, and also in order to preen before their fellow academics, obscure references to historical personages or events or technologies will inevitably pop up. “You tell me that the Internet is inherently democratic, and yet, don’t you know what Forysthe P. Wigglesworth III, nineteenth-century American abolitionist, adventurer, and inventor of the epilecticoposcope, had to say about the utopian rhetoric surrounding the telegraph?” Followed by a (not terribly germane) quote from Wigglesworth, and a knowing smile.

Then too, the ready dismissal of new media’s significance is a cheap way to claim political sophistication, or a kind of old school radicalism: oh, I am just too, too historically aware to buy into all that hype.

A more justifiable rationale for this sort of argument, at least seven or eight years ago, when propaganda about the Internet was ubiquitous and rarely challenged, was simple weariness about the claims made on its behalf (Darin Barney has a nice quote at the beginning of his book Prometheus Wired from John Perry Barlow, in which Barlow calls digitized information “the most profound technological shift since the capture of fire.”) A reminder about historical perspective often felt apropos. Nowadays there is less need, although some of us (again I would probably include myself here) do tend to slip back into what has become something of a reflex response.

This is offered more in the spirit of explanation than exculpation. The pattern that Dave describes is intellectually lazy and boring, and because of this it essentially helps make the case for those many people who would rather we just forget all about history when talking about modern media technologies.

Saturday, July 5, 2008

Shaun Tan's "The Arrival"


I saw on Crooked Timber that Shaun Tan’s graphic novel, The Arrival, had received a prize from Locus, the science fiction literature magazine, which prompted this post. The Arrival is constructed around the experience of a newly-arrived immigrant (Tan is an Australian of Malaysian-Chinese descent) in a vaguely surreal city: people travel about in balloon-ships, there is an invented alphabet, and the main character is accompanied for most of the narrative by a sort of friendly tadpole-like creature. Tan's world reminded me a bit of the work of the American children’s book artist David Wiesner. Like Wiesner, Tan tells his story without using words.

The conceit—the new world as a variation on Oz—may strike some readers as fey and a little too precious, but it also, to my mind, highlights how certain media can provide us with a distinctive aesthetic experience. At some point in reading the book, I started to think about how a literary novel or even a movie could capture the feeling of strangeness and confusion and wonder and vague foreboding that is the experience of anyone encountering a radically different society. I couldn’t think of how it could be done as well as Tan has managed to do it here.


Monday, June 30, 2008

RESPONDING TO BREWIN'S TWO CLAIMS: [I AGREE WITH THEM]



So, a while back, my friend and co-blogger Mark Brewin took on the forces of evil, which had (presumably right before tying our heroine to the train tracks) made the internet out to be some kind of revolutionary and good, democratic thing. I'm going to follow him up on this, with a coda that presents my real ideas here.

Brewin offered two claims, and encouraged us to follow up on them:
CLAIM THE FIRST: "While [the internet] may reduce the importance of some forms of social inequality, it builds upon, perhaps even heightens, the importance of others." He takes as an example the supposedly democratic Wikipedia, which in fact is pulled together not by an army of the hoi polloi, but by a relatively small group of people. Prospective Wikipedia entries that don't fit the definitions of 'entry-worthiness' that are maintained by the unseen overlords simply don't make it online. Concludes Brewin: "Wiki as Internet elitism, disguised as Internet egalitarianism."

CLAIM THE SECOND: "At the same time, it would be misguided to ignore the ways in which the new media environment has increased the scope for human creativity, and opened up possibilities for human interaction that most of us couldn’t even have imagined as recently as ten years ago." Brewin mentions nothing in support of this, because he doesn't have to. The evidence is all over the place.

Right, Mark. Good. But where do we go when we pull these things together? There are lots of opportunities. Here is one idea and a meta-level observation:

ONE IDEA: The idea of visibility seems to be an important dimension in much of this. Think about it this way: Wikipedia is an example of an online application that allows for new modes of visibility/revelation/publicness (in that it is relatively open to a relatively large number of people to post and edit entries), while the system by which this visibility is policed is itself not very visible. In this sense, there is an ambivalence of visibility about the whole thing, in that the act of concealment/occlusion is part of the revelation. Foucault made a big deal about how the 'eye of power' worked, and how individuals (and, more broadly, the human sciences and individualism) were constituted in part by being made visible and available to power through monitoring.

But here we see the reverse (and NOT the opposite) of this: we see how visibility is something that is used (and how concealment of visibility is used) by online interactants. That there is something being concealed at all times is not a difficult suspicion to maintain; it is an obvious Kenneth Burkean starting point. The question I pose is this: how persistent is this blend of visibility and concealment? Is this what it's going to be like for a while? I doubt it...

From a different angle, I think we see yet another instance of how we are, in Alvin Gouldner's terms, moving from a society organized by "the command," to a society organized around "the report." The Wikipedia example shows us a snapshot of a societal arrangement that makes it so that facts (here in the obvious form of a compendium of facts that is obviously based on an encyclopedia model) matter, and reflection about how those facts attained their fact-y status is less easy to come by. And, as umpteen journalism scholars have pointed out: fact-based reporting is VERY difficult to accomplish with a large number of equal participants. If objective reporting survives as an ideal in newsrooms, it is at least partly because 'facts' lend themselves to the kind of hierarchical arrangements that newsrooms have created. Wikipedia and other less-than-directly-democratic online ventures show us that facts go well with hierarchy. If you want democracy, you better be prepared for something much messier than facts.

META-LEVEL OBSERVATION: I have grown weary of attempts to understand these issues simply by identifying utopian and dystopian claims surrounding new media. Historians of the media often do this, and it often looks like this:
Premise 1: Old media were often described in utopian and dystopian terms. [examples provided]
Premise 2: This new media phenomenon is also described in utopian and dystopian terms. [examples provided]
Conclusion [with my own sarcastic use of un-grammar]: The computers is nothin' new.

This approach to history and new media is easy to find, and there are some serious problems with it. First: This approach is usually invoked in an appeal to get 'past' technological determinism (a la McLuhan). But in almost all cases, this technological determinism is chucked overboard in favor of a cultural determinism. And it's often a flavor of cultural determinism that makes it seem as if nothing ever changes. Speaking as someone who scoffs loudly and rudely at claims of novelty: there is new stuff, and the material (the technology, the economy, and other stuff) matters.

Second: this approach is often motivated by a historiographical assumption that goes unstated (indeed, some practitioners may be surprised to find it called an approach at all). It goes like this: I'm going to tell you about old and new media, but in order to do so I will ignore the media themselves and just tell you what a couple of elite newspapers have to say about those media. Those who have done media history know why researchers make this choice: newspapers are a heck of a lot easier to find (and understand) than other historical data. But I think it's time to get past the purely discursive, cultural, continuity-favoring approach to technology. Easier said than done, though.

Wednesday, April 30, 2008

Message-Force Multipliers and Panic at the Pump

Last Sunday’s New York Times published a massive, well-sourced piece of investigative journalism on news networks’ use of Pentagon-briefed “military analysts” to comment favorably on the war in Iraq. These retired high-ranking officers were called “message force multipliers” and “surrogates,” spinning war news to keep the intervention justified and the outlook positive. Though they looked like neutral experts, some of them also worked for defense industries as lobbyists or consultants, and the broadcasters failed to investigate or disclose these conflicts of interest. More damning, the Pentagon apparently used public funds to propagandize the American people, which, believe it or not, is still illegal. The chairman of the Senate Committee on Armed Services is reportedly requesting an investigation. None of this is much of a surprise. It’s that bad old military-industrial complex, and the “corporate media” is just an industry in league with the folks getting rich selling big guns, exactly what Eisenhower warned us about in 1961.

That said, the military-industrial complex was only one of the two threats to American democracy that Eisenhower mentioned in his farewell address to the nation, and I almost never see reference to the second: the government’s relationship to research:

“Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers. The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present – and is gravely to be regarded.”

It’s easy to point to the fulfillment of Eisenhower’s forecast of the power of the military-industrial complex, but this second “threat” at first seems harder to nail down (except by those of us who smugly congratulate ourselves for eschewing “administrative” research). It’s hard to grasp how the general public was meant to understand it then, and how we can understand it now. This view of “free ideas” – as opposed to ideas that cost a lot and require validation with a patent or other “deliverable” – seems romantic but obsolete. However, when paired with the American dependence on oil, and the lengths to which we go to “protect our interests” in the Middle East, Eisenhower’s skeptical glance at “the power of money” in allocating research funds makes a lot of sense. The scholars he imagines are “hard” scientists – those who might have figured out some energy alternatives by now, had public money been aggressively allocated to this sort of research instead of war (and internally-targeted psy-ops programs put in place to justify war). As critical media scholars, we have some interest in examining the demise of the notion of “free ideas,” and wondering whether the education industry has long been covered by Eisenhower’s first warning, leaving the second warning elegantly phrased and interesting to read, but ultimately redundant.

Tuesday, April 1, 2008

Jesus Christ, Superman

Given my religious allegiances, and my amateur interest in the possibilities of sequential art (not to be confused with the actual expertise of our own pravdakid, on which see below), it was probably inevitable that I would end up directing readers to this story. And for those of you who aren’t into the whole “Christian” thing: the very notion of a British artist of African heritage using a Japanese variation on a medium generally associated with twentieth-century American popular culture, in order to update a several-thousand-year-old text from the Middle East, should be interesting simply as an example of the emerging global ecumene, if nothing else.