The Regulation of the Chinese Blogosphere

This is another in a series of blog posts produced by the PhD students in my Public Intellectuals seminar being taught through USC's Annenberg School of Communication and Journalism.


The Regulation of the Chinese Blogosphere

by Yang Chen

On September 9, China's highest court and prosecution office announced that non-factual posts on social media viewed more than 5,000 times, or forwarded more than 500 times, can be regarded as serious defamation and result in up to three years in prison.


This new law reflects the tense relationship between the government and the fast-growing online public sphere. As one of the 500 million registered users of Weibo (the most popular Twitter-like microblog in China), I feel a hint of nervousness. Normally my posts are read around 500 times, far below the 5,000 threshold, but Weibo is an open space where anyone can view and comment on any post. Thus I have to be much more cautious about what I post in order to keep myself out of trouble.
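To make the numbers concrete, here is a trivial sketch of the threshold rule as I understand it. The function is my own illustration of the published criteria, not any official system.

```python
# A minimal sketch of the judicial interpretation's numeric thresholds.
# My own illustration of the published criteria, not an official system.
def meets_defamation_threshold(views: int, forwards: int) -> bool:
    """A non-factual post can count as serious defamation once it has been
    viewed more than 5,000 times or forwarded more than 500 times."""
    return views > 5000 or forwards > 500

print(meets_defamation_threshold(views=500, forwards=20))   # False: a typical post of mine
print(meets_defamation_threshold(views=6200, forwards=80))  # True: crosses the view threshold
```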


I hope you won't ridicule my timidity. Everybody has to be cautious, because the first user arrested for violating this new law was an ordinary 16-year-old schoolboy whose posts questioned the police's handling of a case and a conflict of interest in the court (for more, see "China detains teenager over web post amid social media crackdown"). But beyond this poor junior-school boy, there is a group of people far more nervous about this law: the Big Vs.


Who are the Big Vs? Big Vs are the opinion leaders who actively engage in online discussion of political, economic, and social issues. These prominent figures are each followed by more than a hundred thousand netizens on Weibo. Unlike grassroots users, who post under hidden identities, these users are verified by the website under their real names and occupations, and a gold "V" mark that stands for "verified" appears beside their account names.




Because the Big Vs are followed by a considerable number of Weibo accounts, their posts and reposts reach a much larger audience than those of grassroots accounts. In fact, though verified accounts represent only 0.1% of Weibo accounts, almost half of the hot posts (posts commented on more than 1,000 times) were written by them. So rather than a we-media platform, Weibo is more like a "speaker's corner" for the Big Vs; their posts easily get reposted and commented on more than ten thousand times. Although everyone has the same right of free speech on Weibo, some people, like the Big Vs, speak much louder than others.


Of course, with real identities and huge popularity online, they are also much easier targets for this new law. Let's take a brief look at what has happened to some of the Big Vs recently.




Most Big Vs are Chinese venture capitalists and investors; they would put their assets at risk by going against the government. Not surprisingly, then, the Big Vs have been inclined to cooperate with the government.




After an account is verified and branded with a "V," the website fits the account into a category such as education, entertainment, business, or media. The verified account enters the "House of Fame" under that category and is recommended to general accounts relevant to it. This fosters closer connections among people within a particular category while simultaneously distancing people in other categories.


Earlier this year, the website asked all users to fill in their education backgrounds and required newcomers to register with their phone numbers. This also allows the website to identify users' background information and recommend them to people with similar backgrounds. As a result, highly educated individuals communicate with other highly educated individuals; less educated individuals, with other less educated individuals.


Because of this classification, a verified account followed by one member of a group gets recommended to other members of that same group, so people end up following the same verified accounts. This system creates information barriers. For instance, the likelihood that a verified account full of helpful, accurate information followed by a highly educated member will ever be recommended to a less educated member in another group is slim. The less educated member may never get the chance to increase his or her access to information, although both are using the same networking service.
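To make this sorting dynamic concrete, here is a minimal sketch of how recommending accounts only to users with matching background attributes walls groups off from one another. This is my own illustration; the attribute names and the matching rule are assumptions, not Weibo's actual algorithm.

```python
# Hypothetical user profiles -- the attributes are illustrative assumptions,
# not Weibo's actual data model.
users = [
    {"name": "A", "education": "graduate",  "region": "north"},
    {"name": "B", "education": "graduate",  "region": "north"},
    {"name": "C", "education": "secondary", "region": "south"},
]

def recommend_to(follower, users):
    """Recommend the account a user follows only to other users who share
    that user's background attributes -- the homophily rule described above."""
    return [
        u["name"] for u in users
        if u["name"] != follower["name"]
        and u["education"] == follower["education"]
        and u["region"] == follower["region"]
    ]

# A follows a verified account; only background-matched users ever see it,
# so C never gets the recommendation at all.
print(recommend_to(users[0], users))  # ['B']
```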


Users are also separated by geographical location. Individuals from northern regions speak to other individuals from northern regions; individuals from southern regions, to individuals from southern regions. Each user is matched into groups based on his or her characteristics and confined to an environment where he or she meets only similar users. Through this process, the groups drift further and further apart from one another.


Not surprisingly, I have found that users from outside the country are segregated from domestic users as well. When I first came to the US, I registered a Weibo account using my U.S. mobile phone number. My posts were often deleted secretly, without any explanation from the website. Even more absurd, everything looked fine on my personal page, but on my followers' pages the posts had quietly disappeared. If my friend had not told me, I would never have known.


A screenshot from my follower's page


A screenshot from my page


As the screenshots show, the post in the red circle appeared on my personal page but was deleted from my follower's page. My "deleted" posts shared one common trait: all of them contained the word "activity," since I was spreading information about USC's upcoming events, some of which were not even related to China or the Chinese regime. Because some of these posts were deleted the second after I posted them, I guessed that a strict automatic filter had been applied to my account, perhaps because my U.S. mobile number put me in a more sensitive category. I was right: after I changed my mobile number to a Chinese domestic number, I never encountered another deletion. The segregation is really simple, yet effective; there is no doubt that the censorship system creates more information barriers.
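For readers curious about the mechanics, a filter like the one I seem to have triggered could be as crude as the sketch below. This is a guess on my part: the keyword list and the rule tying strictness to the phone number's country of origin are assumptions for illustration, not Weibo's documented behavior.

```python
# A minimal sketch of a keyword-plus-account-sensitivity filter.
# Every rule here is an assumption for illustration, not Weibo's actual system.
SENSITIVE_WORDS = {"activity"}  # hypothetical watchlist

def is_sensitive_account(phone_number: str) -> bool:
    """Treat accounts registered with non-Chinese numbers as higher risk."""
    return not phone_number.startswith("+86")

def should_delete(post: str, phone_number: str) -> bool:
    # Flagged accounts get the strict keyword rule; others pass untouched.
    if is_sensitive_account(phone_number):
        return any(word in post.lower() for word in SENSITIVE_WORDS)
    return False

print(should_delete("Join our campus activity on Friday!", "+12135550100"))   # True
print(should_delete("Join our campus activity on Friday!", "+8613800000000"))  # False
```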


The Big Vs are verified accounts, each followed by millions of people, which makes them the "links" among different groups. Controlling these links means further isolating the different groups and getting a tight grip on the information flow on Weibo.


The policy makers' purpose is to develop a regulated and peaceful internet public sphere. However, we should bear in mind that "peace" does not equal "quietness" or "weakened voices." There are obviously problems to be solved and voices to be heard. If tears are buried deep in one's heart, it doesn't mean the wound is no longer there. I will end this blog with an old Chinese saying, "防民之口,甚于防川": if you trap water in a stream, there will be a disastrous flood; if you shut up the voices of the public, a worse disaster waits ahead. The saying is thousands of years old, but the words transcend time and still apply today; the Chinese regime should take a lesson from the wit of our ancestors.

Information Darwinism

This blog post was produced by one of the students in my PhD seminar on Public Intellectuals, currently being taught at USC's Annenberg School of Communication and Journalism.

Information Darwinism

by David Jeong

The brain craves information. Individuals demonstrate a strong preference for novel, highly interpretable visual information (Biederman & Vessel, 2006). This preference stems from the evolutionary advantage that an information-rich stimulus, image, or environment would provide over a barren one. Neuroscientists have even provided evidence that we have a bias for irregular, non-singular shapes and curved cylinders over regular, singular ones (Amir, Biederman, & Hayworth, 2011). Simply put, human beings are not carnivores or omnivores-- rather, we are info-vores. And oh boy, do we have a lot of information-- we can presently access more information than ever before in our evolutionary history (I hope I can make this claim?).

Since our brains evolved to solve the problems of our ancestral environments (Cosmides & Tooby, 1992), we may be experiencing a capacity crisis in the amount of information we can remember, understand, or care about. Intentionally or not, we are constantly sifting through information in our environment-- we always have, not just in the present day. My main argument is that when we have as much information at our disposal as we do today, there must be casualties.

One type of information that does seem to thrive is novel information-- we are constantly sharing and re-distributing "original content." It is no coincidence that we receive pleasure from new information. Competitive learning theory, otherwise known as "Neural Darwinism," describes how the most strongly activated neurons in a network inhibit the future activity of moderately activated neurons upon recurring presentations of an image (Grossberg, 1987). The strongest-activated neurons come to dominate future perceptions of that image, resulting in a net reduction of neural activity. This suggests that neurons prefer novel stimuli because those stimuli have yet to undergo Neural Darwinism.
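Since the mechanism is easier to see in miniature, here is a toy winner-take-all simulation in the spirit of competitive learning. All sizes, rates, and the response threshold are my own illustrative choices, not parameters from Grossberg (1987).

```python
import numpy as np

# Toy winner-take-all competitive learning. Sizes, learning rate, and the
# "near the max" threshold are illustrative assumptions, not from the paper.
rng = np.random.default_rng(0)
n_inputs, n_neurons, lr = 4, 5, 0.5
W = rng.random((n_neurons, n_inputs))
W /= np.linalg.norm(W, axis=1, keepdims=True)  # unit-length weight vectors

def present(stimulus):
    """One exposure: the most strongly activated neuron wins and pulls its
    weights toward the stimulus; moderately activated neurons fall behind."""
    a = W @ stimulus
    winner = int(np.argmax(a))
    W[winner] += lr * (stimulus - W[winner])
    W[winner] /= np.linalg.norm(W[winner])
    return a

familiar = np.array([1.0, 0.0, 0.0, 0.0])
for trial in range(5):  # repeated presentations of the same image
    a = present(familiar)
    contenders = int(np.sum(a > 0.8 * a.max()))
    print(f"trial {trial}: neurons still competing = {contenders}")
# Repetition concentrates the response in a single dominant winner; a truly
# novel stimulus would re-engage the untuned neurons -- one reading of why
# novelty feels rewarding.
```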

The information in the current media sphere seems to also be undergoing its own version of what I will refer to as "Information Darwinism":

* Given two forms of information, the novel will dominate over the replicated.
* Given two forms of information, the simple will dominate over the complex.
* Given two forms of information, the visually appealing will dominate over the neutral.
* Given two forms of information, the humorous (which also implies novelty) will dominate over the banal.

You get the picture.

*Note* Of course, novel information does not always reign supreme. Nostalgia and familiarity are counter-examples of this pattern. That said, nostalgia would not be nostalgia if it were pushed to our attention daily. Nostalgic content can only become effective through intervals of inattention.

We have a bias for the fantastic, the amazing, the horrible, and disastrous. Most of the time, we are not interested in what occurs most of the time. We disregard the status quo.

What I mean by Information Darwinism is that amid the massive amount of information being pushed into our brains, we are witnessing an information-based natural selection in which novel, simple, and visually appealing information dominates.

Not only are shorter, simplified forms of information (memes, Twitter updates, Facebook statuses) winning out, but these forms also champion novelty (original content, humor) and visual appeal. These "predators" are feasting on information that requires a degree of persistence, permanence, and, god forbid-- patience. Public discussion of climate change, ongoing conflicts overseas, inner-city poverty, and our tremendously dysfunctional health care industry is simply being driven to "extinction".

Tversky and Kahneman's (1982) availability heuristic suggests that we attribute greater probability and frequency to information that is more readily available in our minds. Perhaps the more troubling issue is the potential for a naturalistic fallacy: assuming that survival of the fittest indeed yields the "fittest." Ultimately, "fitness" should refer to physical survival-- and accurate communication of health and political issues does have implications for life and death-- but I feel it also encapsulates physical and mental health, financial stability, and any domain of social life that represents a form of success. As such, "fitness" here refers to positive impact on the greatest number of people-- regardless of race, gender, nationality, religion, and the like. In other words, we may be fooling ourselves to think that the information our mind's eye attends to is indeed the information most worthy of our attention.

The information that survives is information that garners our collective attention, that captivates the collective consciousness. This information may be biased, inaccurate, or may simply be fictional content intended for entertainment-- which is not to say that such information is meaningless as it represents the social reasons for sharing information in "spreadable media" (Jenkins et al., 2012).

So not only are we wired to prefer this attention-grabbing information, but this attention-grabbing information is concurrently being reproduced and shared at the expense, and to the demise, of information that is less attention-grabbing.

Problem: We have already been primed with much of the important information in the world.

Problem A: Less attention-grabbing information tends to be information we already know, or information that is complex.

Problem B: Important information tends to be information we already know, which tends to be less attention-grabbing.

We know Diet Coke is bad for us; we know much of the Middle East is in various sorts of turmoil and conflict; we know, we know. We just can't bring ourselves to care about this information more than the next episode of Breaking Bad, or the top post on the front page of Reddit.

This is not to say that Breaking Bad offers less desirable information or a less desirable mode of delivery. In fact, its writers demonstrated a truly complex form of narrative that goes against the traditional and familiar TV narrative. It is precisely its creativity and originality that make it a champion of TV ratings and of our collective consciousness.

That said, annual re-runs of Breaking Bad, while remaining popular, will inevitably decline in ratings and in our collective consciousness over time. Aren't "ongoing issues" basically "re-runs"?

*Aside* The irony: "fittest" information = information that provides a positive impact to the most people. "Fittest" information represents the essence of morality and altruism. Ironically, the information that is becoming "extinct" is the information most crucial for our collective success and survival. (Perhaps collective survival goes against the central tenets of natural selection?!)

Complex concepts in science are often misunderstood because they are simplified and thought of in terms of "linear causality," with a singular cause and effect, when in fact science often involves a complex system of causality that may be iterative, cyclical, and unfold over time and space (Grotzer, 2012). According to Grotzer, we simplify causality because of our preference for attributing agency to conceptual understandings, our tendency to rely on cognitive heuristics (Tversky & Kahneman, 1982), and the limitations of our attention (Mack & Rock, 1998). Our visual perception is subject not only to the natural tendencies of our attention, but also to differences in how we perceive visual images in our central versus peripheral visual fields.

In a world of images, memes, and 350-character messages, we cannot help but be deterred from complex understandings of crucial political and scientific issues-- let alone an accurate and complete understanding of them. The non-immediacy of these issues means they do not alert our attention or perceptual systems the way an elephant charging toward us would. Rather, inattention to, and conscious ignorance of, non-immediate, non-perceivable issues (radiation contamination, global warming, GMOs, etc.) involve gains that are immediate and gratifying (fresh sashimi, convenience and laziness, cheap food, etc.) and harms that are tacit. Even more troubling is the exploitation of our cognitive limitations and tendencies for harmful ends. Sensory formats (visual and auditory advertisements, even tastes) are now engineered to target sensory vulnerabilities while we overlook non-sensory information (global warming, obesity, risky decisions, any decision with a positive short-term and negative long-term outcome).

This is not necessarily a value judgment against the Breaking Bads, the Twitters, and the Reddits of the information world. Rather, there is much to learn from these thriving models of information. There is a wealth of "fit" information intertwined with entertainment on these newer modes of information dissemination. If anything, perhaps we have to move past the "iron curtain" of network news, academic fluff, and the like. We are facing a communication gap, a failure of learning, and a reality that is increasingly at odds with traditional communication environments. If there is indeed an Information Darwinism underway, we cannot continue to beat a dead horse with "what used to work." It is our moral obligation to engage in our own pedagogical arms race against the changing information landscape in order to maximize information that yields the most physical, mental, and social "fitness" for as many people as possible.

---------------------------------------

References

Amir, O., Biederman, I., & Hayworth, K. J. (2011). The neural basis for shape preferences. Vision Research, 51(20), 2198-2206.

Biederman, I., & Vessel, E. (2006). Perceptual pleasure and the brain: A novel theory explains why the brain craves information and seeks it through the senses. American Scientist, 94(3), 247-253.

Cosmides, L., & Tooby, J. (1992). Cognitive adaptations for social exchange. In J. H. Barkow, L. Cosmides, & J. Tooby (Eds.), The adapted mind: Evolutionary psychology and the generation of culture (pp. 163-228). Oxford University Press.

Grossberg, S. (1987). Competitive learning: From interactive activation to adaptive resonance. Cognitive Science, 11(1), 23-63.

Grotzer, T.A. (2012). Learning causality in a complex world. Lanham, MD: Rowman and Littlefield Education.

Jenkins, H., Ford, S., & Green, J. (2012). Spreadable media: Creating value and meaning in a networked culture. NYU Press.

Made by Hand, Designed by Apple

This is yet another in a series of blog posts authored by the students in my PhD seminar on Public Intellectuals, being taught this term in USC's Annenberg School of Communication and Journalism.

Made by Hand, Designed by Apple

by Andrew James Myers

 

Apple's recent release of two new iPhone models, the iPhone 5s and 5c, was heralded with a pair of videos celebrating the aesthetics of each device's design and physical materials. The first, a 30-second spot entitled Plastic Perfected that played at the 5c's unveiling and aired on national TV, shows abstract swirls of liquid color against a white background gradually molding themselves into the form of the iPhone 5c's plastic shell. Other components, like the camera and the small screws, emerge spontaneously from within the molten plastic, until the idea of the iPhone is fully materialized, having literally created itself.

 

 

The other video, a companion piece also shown at the company’s iPhone presentation, depicts a mass of molten gold against a black background, swirling elegantly and weightlessly to sculpt itself into the iPhone 5s. Hovering components gradually descend into place, and the phone spins to present its finished form.

 

 

Over this past year, in my research of Apple’s marketing, I have watched hundreds of Apple’s ads and promotional videos extending back to the 1980s. For me, these most recent iPhone promotional videos were a surprising addition to this research, as they embody the purest and most potent distillation yet of a longstanding trend in Apple’s marketing. Apple’s marketing texts have long been preoccupied with constructing a certain aesthetic myth for the creation of Apple products. This mythical origin story at its essence taps into notions of vision, creativity, and genius while obscuring the devices’ real-world material origins as the product of concrete human labor.

 

Apple frequently releases "behind-the-scenes" promotional trailers for each of its major product launches. In Apple's (widely accepted) view of product creation, the valuable labor occurs in the realms of engineering, design, executive leadership, and software. This is reflected in two significant patterns in the visual rhetoric of its behind-the-scenes videos: an exclusive focus on automated robotic assembly processes, and animated visualizations of components spontaneously self-assembling against blank backgrounds. In the narrative framing constructed by these rhetorical patterns, human labor at assembly factories like Foxconn is completely erased, written out of Apple's corporate self-identity.

 

 

For example, consider the above making-of video for the iPhone 5c. The first visual pattern, exclusively showing automated labor rather than human labor, is always accompanied by a verbal discussion of manufacturing innovation. As we watch Macs and iPads being built, we almost never see a pair of human hands; in fact, I have been completely unable to find a single instance where worker hands — much less a full body or face — are shown in an Apple video made after 2008. Hands as a visual symbol and touching as a ritual are instead reserved for the consumer (“The fanatical care for how the iPhone 5c feels in your hand”), with frequent close-ups of disembodied hands touching, gripping, manipulating the product’s glossy material glory.

 

Second, Apple’s particular imagination of creation is manifest through its animated visualizations of how components fit together inherently and effortlessly. In one major type of these animations, components float in layers in the air, slowly and gracefully layering themselves into a snug assemblage. The molten-plastic and molten-metal ads discussed at the beginning of this post are merely the most recent (and visually extravagant) iteration of this aesthetic. Designing how components will fit together into ever-shrinking cases is essential to Apple’s hardware aesthetic obsession over making products as thin and small as possible. The designers’ work of putting the jigsaw puzzle together conceptually is seen as the real feat; actually putting it together, on the other hand, is trivial.

 

The visual rhetoric embedded in Apple's videos clashes intensely with recent journalistic coverage of Apple's production process. Beginning in 2006 and climaxing in early 2012, the popular media actively worked to raise awareness of the labor conditions of the individuals who work in the overseas factories producing Apple's popular iPods, iPhones, iPads, and Macs (along with, secondarily, the electronics of almost every other major brand). This sensational story gained wide exposure by juxtaposing the brand mystique of Apple, perhaps the most meticulously and successfully branded company in the world, with a dystopian behind-the-scenes narrative completely at odds with Apple's image. Apple has responded to this narrative with a number of public relations initiatives, including a few laudable measures that have genuinely improved supplier transparency and labor conditions. Yet, as labor violations in Apple's supply chain continue to surface, and as Apple's publicity materials continue to gloss over the human labor involved in product assembly, it is clear that much more needs to be done to address these issues.

 

A few weeks following two high-profile reports in the New York Times and NPR in early 2012, Apple responded to the negative publicity with a press release announcing that it would for the first time bring in a third-party organization, the Fair Labor Association, to independently audit its suppliers.[1] Apple also exclusively invited ABC news to visit the audit, yielding a 17-minute story broadcast on ABC’s television newsmagazine Nightline.

 

The Nightline piece offered the first journalistic footage from inside Foxconn’s assembly facility, and the pictures produced were astonishing. Reporter Bill Weir expresses surprise at the magnitude of manual labor he sees, repeatedly suggesting that simply seeing the factory process at work will cause viewers to “think different” about their Apple products. “I was expecting more automated assembly, more robots, but the sleek machines that dazzle and inspire... are mostly made by hand. After hand. After hand.” On Apple’s historical secrecy about its product manufacturing, Weir offers one interpretation. “If the world sees this line,” comments Weir over footage of a long, crowded assembly line, “it might change the way they think about this line.” Cut to a shot of a huge crowd of American consumers lined up to get inside a New York City Apple Store at a product launch.

 

What the Nightline piece lacks in the sensational details of other Foxconn reports, it makes up for in the sheer visual impact of its startling images. We see exhausted workers collapsed asleep at their stations during meal breaks, the infamous suicide nets, the cramped eight-to-a-room dorms, and the apprehensive demeanor on the faces of prospective employees lining up outside the gates. The report even stages a moment in which the reporters visit a town and show an iPad to the poor parents of Foxconn workers, none of whom have ever seen one.

 

After ABC’s first exclusive look inside Foxconn, other reporters were granted access to the factory, leading to a significant rise in video footage being broadcast and circulated online. More and more people were being exposed to the reality that iPads and iPhones are made by hand, by real humans struggling in almost dystopian conditions.

 

As I have researched and grappled with these issues, I have collected every relevant video I could find onto my hard drive, which has over time become quite an exhaustive archive of Apple's promotional material. At the same time, as I attempt to write about my research, I am frustrated by my inability to fully convey in written form the visual qualities of the videos I am analyzing. My initial interest in the topic sprang from an intangible, emotionally entangled reaction to the aesthetic contrast between Apple's promotional videos and journalists' Foxconn coverage, and I wondered whether it would be possible to make more impactful points through a visual essay rather than a written paper.

 

At first, I had in mind little more than a rather conventional expository documentary, nothing more than an illustrated lecture. But after taking Michael Renov's fantastic seminar on documentary, I decided to try something a little more avant-garde. Inspired by documentary essayists such as Emile de Antonio, Jay Rosenblatt, Alan Berliner, Hollis Frampton, and Elida Schogt, I was interested in testing out these filmmakers' innovative editing techniques for constructing original arguments by re-appropriating archival footage. I realized it might make a difficult and enlightening challenge to create a compilation documentary purely from archival footage, without voiceover, interviews, or text. I finished a 12-minute first cut of the video essay this summer, and the result is below.

In contrast to the affordances of the written essay, one strength of the video medium that surfaced during editing was an ability to engage more directly with the kinetic and haptic experience of the body. In her essay “Political Mimesis,” Jane Gaines describes revolutionary documentary’s ability to work on the bodies of spectators, to move viewers to action. “I am thinking of scenes of rioting, images of bodies clashing, of bodies moving as a mass,” writes Gaines, suggesting that “images of sensual struggle” are a key element of a number of political documentaries. Gaines argues that certain depictions of on-screen bodies can produce in the audience similar bodily sensations or emotions, which inspired me to focus in my video essay on the concrete bodily attributes of sweatshop labor.

 

Gaines’s article brought me to formulate the central recurring visual motif of the film: a montage of close-up hand movements. I wanted to illustrate the corporeal vocabulary through which American consumers define their interaction with technology (moving and clicking the mouse, gesturing on a trackpad, tapping and swiping on a tablet), and offer in contrast the bodily relationship factory line-workers have to those same devices: repetitive, slight, monotonous movements.

 

As mentioned previously, the human bodies of workers — even their hands — are conspicuously absent from the footage Apple uses in their promotional videos about the making of their products. I tried to draw attention to this gaping corporeal absence with an extended montage segment of these fully-automated factory processes played simultaneously over an audio track explicitly addressing the harsh conditions for the factory workers we’re not seeing. I hoped that by explicitly cultivating a sense of mimetic identification throughout the rest of the film, the sequences of hands-free assembly would stand out as somewhat ghastly and unnerving.

 

Whether this film succeeds in communicating its analysis is for others to decide; for me, I both enjoyed the novel experience of making it and feel that the editing process forced me to think about my material in new ways. Focusing on making an argument through juxtaposition pushed me to look for new contrasts and valences between bits of material I had not noticed before, to consider formal elements like timing and word choice with a new level of scrutiny, and to see my potential output as a researcher and advocate as not limited strictly to writing books and articles.

Andrew James Myers is a Ph.D. student in Critical Studies at the University of Southern California, and holds an M.A. in Cinema and Media Studies from UCLA. He is post-processing editor for the Media History Digital Library, and assisted in the creation of Lantern, an online search tool for archival media history. A former co-editor-in-chief of Mediascape, his research interests include media industries and production culture, archival film and television history, new media, and documentary.

 

 


[1] Apple Computer, Inc., "Press Release: Fair Labor Association Begins Inspections of Foxconn," (2012), http://www.apple.com/pr/library/2012/02/13Fair-Labor-Association-Begins-Inspections-of-Foxconn.html.

Mules, Trojan Horses, Dragons, Princesses, and Flies

The following is a post written by one of the students in my PhD seminar on Public Intellectuals being taught this semester at the USC Annenberg School of Communication and Journalism.

Mules, Trojan Horses, Dragons, Princesses, and Flies

by Addison Shockley 

Shortly after I arrived at the University of Southern California, to begin working on my doctorate in communication, there was a banquet to welcome the new group of communication and journalism graduate students. There were round tables, and people sat where they liked. At one point, the Dean of the Annenberg School of Communication and Journalism came up to the table I was sitting at and asked us about our table and what kind of students we were, whether we were journalism students or communication students or a mixture of the two. I spoke up for our table and said, “We all happen to be communication students,” and then added, “It was natural that we all sat together.” He replied, “Just because something is natural doesn’t mean it’s good.” Then he had to walk away because he was being called to give the opening speech, and I sat there and felt a little bit foolish.

He was right, though. Natural is not necessarily good. It’s natural for dragons to take princesses captive in their lairs, but it’s not good for princesses to lose their freedom. It’s natural for flies to be drawn to the light, but it’s not good for flies themselves to be electrocuted when the light’s a bug-zapper. Dean Ernest Wilson believed this principle of distinguishing between what’s natural and what’s good—that sometimes they’re the same and sometimes they’re not—and I believe it too.

This story is a small part of a larger story, the story of my journey of becoming who I am today. When I began my undergraduate education in 2005, I would never have been able to predict that I would be doing a Ph.D. today and examining ideas like rhetoric and the tragedies caused by misunderstanding. When I graduate in a few years, I hope to teach rhetoric in a university and share insights about communication and (mis)understanding with my students, as well as the general public.

It is a commonplace to assert that we are living through a communication revolution, and of course people are studying the ways in which our lives and communication practices are being revolutionized by new technology and new media. This is important work, but I prefer to focus on what I call the “foundational” issues in communication, questions related to what human beings are, and why they should communicate with others in the first place; questions about communicating what we know, and how we know it; questions about values, and how we communicate in line with them, about them; and questions about what it means to communicate purposefully and wisely. These are the questions I believe need to be addressed today alongside the more timely questions about technology, new media, and the ways our world is being transformed by them.

None of our experiences are wasted, or so my mother tells me (and I think she’s right—at least, they don’t have to be wasted). In this post, I share some of my personal history to discuss how it relates to who I have become, and how it has shaped my perspective on a set of real world problems that I’ll share with you.

********************************************

Like many college students, I changed my major multiple times, uncertain what I wanted to do with my life. I began in the fall of 2005, studying film as a freshman at Azusa Pacific University, an arrangement that lasted only a year. I decided to return home to Kansas City, Missouri, having realized the film industry was harder to break into than I thought, as well as less appealing than I had imagined it to be, and I began considering what to do next. I took a year off, so to speak, talking with friends and exploring options, and finally made the decision to transfer to the University of Central Missouri—forty minutes east of my hometown of Lee's Summit, Missouri—to begin classes in the fall of 2007 as a mule (the school's odd choice of a mascot), majoring in—get this—Construction Management.

My dad had been a construction manager at one point, and he really liked it, so I thought I’d give it a shot. I began taking classes like “Mechanical Systems of Buildings,” talking about beams and pneumatic nail guns, and wondering why anyone would want their mascot to be a mule. (Okay, secretly, I kind of liked it. Mules are humble, but confident). Three semesters later, with an internship under my belt—shadowing construction managers who built mostly fast-food restaurants and office buildings—I realized this kind of work didn’t appeal to me anymore, at least not enough to make it my bread and butter.

Studying construction had taught me some important things, but I knew it was time for me to move on. I didn’t feel like I was getting the hang of it from the classes, and it seemed I wasn’t a natural at it—not even close. Evidence of that includes my “internship boss” yelling at me the last week of my internship and telling me I had been a failure. He was having a bad day. There was some truth to it, though. I lacked the devotion, preparation, guidance and giftedness to do a good job, simply put. Rather than wanting to read blueprints, I wanted to read novels; rather than daydreaming about building, I wanted to use words to make a difference in society. I had taken enough classes to get a minor in Construction Management, and it taught me how to think about the world concretely, though I was not built for it.

Soon after I lost interest—for good—in construction as a career, I discovered "rhetoric." The fact of the matter is, I took my first course in rhetoric during my senior year of college, but almost immediately, I knew this was "it." For the sake of clarifying what I want you to imagine when I say "rhetoric," replace whatever comes to mind with this definition from a famous twentieth-century scholar of rhetoric, I.A. Richards (in his book The Philosophy of Rhetoric), who defined rhetoric as "the study of misunderstanding and its remedies." To have a "rhetorical" sort of imagination is to be capable of pinpointing instances of misunderstanding and of knowing how to work on them in order to undo them.

In rhetoric, as a rather marginalized academic subject these days, I found something I cared about. I liked it so much that I decided I would do two more years of Master’s level coursework mostly in theories of rhetoric at the same school—remaining a humble, yet confident mule. I enjoyed this experience very much and began to see myself studying rhetoric at the doctorate level.

I applied to doctorate programs during the fall of my second year of Master's coursework, and I was accepted at USC in the spring. My wife and I moved to Los Angeles the following fall to begin the next phase of our lives. I left the mule and started riding a Trojan horse, so to speak.

A few years prior I had left a career in bricks-and-mortar construction—well, a potential career in construction by abandoning my major in Construction Management. I was going to pursue a career in “words-and-ideas” construction—not physical construction, but cultural or, as it is sometimes called, “social construction.”

 

Image A: Hyatt Regency walkway collapse, Kansas City, 1981

It should be clear from events like the historic Hyatt Regency walkway collapse in Kansas City, Missouri, or from the recent five-story building collapse in Mumbai, that construction is risky.

Image B: Mumbai five-story building collapse, 2013

 

Misconstruction can be fatal, whether we are dealing literally with a physical structure that is misconstructed, or figuratively with a faulty idea or mistaken assumption used as the basis for further thinking. And it may be an obvious side note, but miscommunication can cause misconstruction, as when poorly designed blueprints are used to communicate building procedures that result in faulty structures.

Today, I tell people that I study misunderstanding and how it messes things up, royally. I am convinced it happens all the time, all around us, sadly without the notice of enough people. Let me explain this phenomenon of “social misconstruction,” and then give some examples of it.

****************************************

Rhetoricians care about misunderstanding, which results in the social misconstruction of reality. Wait, social what?

When I say the “social construction of reality,” I’m using a phrase that most academics within the humanities and social sciences have heard of—which can refer to the idea, essentially, that reality isn’t really real, truth isn’t truly true, right isn’t actually righteous, et cetera. Countless interactions among persons in societies result in commonly held assumptions about what exists and how we should respond to it, and these commonly held assumptions create the illusion that there is a single reality because most people seem to agree that, well, this is just the way it is: so it must be so.

There are other versions of this idea of social constructionism, some of which allow that an objective reality exists—and those versions are more convincing to me. But according to this radical version of social constructionism, people “construct” reality through interactions with other members of their culture or society (or world), and in this sense, they “make it up” using language. Rather than discovering reality, and disclosing it with language, they do the reverse, creating reality with language. I agree that ideas of reality are rooted in communities. But I don’t believe that all communities are in touch with reality. I believe that people can’t contain absolute reality in a box or in a system of ideas, but I believe that it exists, despite our limitations in apprehending it fully.

This is obviously a deep subject, one we could read libraries full of books about. I hold the minority perspective—the realist view that, on the one hand, there is reality, and on the other there is unreality. There's true, and there's false. Another term often used alongside the phrase "the social construction of reality" is "intersubjective agreement," which refers to agreements made about what's what in life, what things mean and don't mean, what reality is. This term suggests that reality is nothing more than what we agree that it is.

But there’s a problem with this idea. Persons under arrest are either guilty or innocent of their alleged offense. I believe in the valuable work of socially constructing one true reality, and in the wasted time spent constructing unrealities. We don’t create reality; we bump into it. Even if we aren’t sure what we’re bumping into, it’s got a personality, and we better learn it. It’s got rules. And it’s got rewards. As one of my friends said, if you break the rules of the universe, they will break you. Denying reality is a slippery slope to some bad back pain.

Most rhetoricians these days don’t believe in “real reality.” They probably wouldn’t tell people they’re interested in misunderstanding, as I do; they’d probably say they’re interested in multiple understandings, and they would probably be uncomfortable with any claims about misunderstanding (after all, can someone be said to misunderstand a world that isn’t real?).

I believe in the metaphysical parallel to physical blindness: the eyes of minds can be distorted in vision, and effectively blind to what is actually happening. Rhetoricians like me care about social misconstruction and misunderstanding, but also about the related issues of miseducation, miscommunication, misassociation, misinformation, disinformation and deceit.

Richard Weaver, another famous rhetorician, taught that “ideas have consequences”—and Kenneth Burke—perhaps the most famous modern writer on rhetoric—taught that words imply attitudes, which suggest actions. For example, naming someone as an “enemy” implies an attitude toward them that encourages certain actions and discourages others, whereas calling them a “friend” would suggest a different attitude, and thus, different actions.

Just to give a few examples of social misconstruction, we can think for a moment about misassociation. We need to cultivate discernment among our citizens so that they can disassociate what has been misassociated, because misassociating things can disserve and harm people in serious ways.

I was eating dinner with some friends the other night, and they shared some disturbing facts with me. One of them works in Uganda, and she told me that social misconstructions of class have led to the malnourishment of children in Uganda because their society has constructed an association of eating vegetables with being poor. Their parents don’t want to feed their children what might be thought of as “poor people’s food.” My other friend from India, who was dining with us, chimed in and added that in India, white rice is associated with a higher-class diet than brown rice (even though brown rice is healthier). These examples, although limited to food, show how any society can contain “misconstructed” meanings and associations, which contribute to the “breaking down” of lives.

To give another example, this time from the United States, we can briefly consider the work of communication scholar George Gerbner. In the United States, where the average person watches more than four hours of television per day (see footnote for reference), the cumulative effect over time is that representations on television begin to cultivate misperceptions of social reality in Americans. For instance, persons who watch crime shows like Law & Order, CSI, or Criminal Minds may perceive that social reality closely matches the depictions in the shows themselves. To see how social misconstructions can emerge from long-term exposure to such shows, consider how CSI consistently presents unrealistic depictions of the technology available to death investigators. Or consider how Law & Order skewed popular perceptions of what constitutes an adequate quantity of evidence to convict someone of an alleged offense, resulting in U.S. jury trials in which jury members required overwhelming amounts of evidence to be comfortable deciding that a defendant was guilty.

These minor examples address the ways in which media content consumed over a long period of time can, taken together, influence people's perceptions such that they misassociate, say, guilt only with overwhelmingly unusual amounts of evidence—more than is typically needed to establish a high enough probability of guilt to declare a person guilty.

Misassociation can take many other forms than this, of course, and much is lost due to mistakes of association. It is the job of the rhetorician to spot them, and zap them with his rhe-gun. We don’t want anyone to keep misassociating what is natural with what is good. (And for further examples of social misconstructions, see the work of Richard Hamilton, who tries to show the mistakenness of a few widely held views in academia: http://www.amazon.com/The-Social-Misconstruction-Reality-Verification/dp/0300063458).

Addison Shockley is a doctoral student studying rhetoric, media, and ethics at the USC Annenberg School for Communication. One of his guiding assumptions is that miscommunication, rooted in deception and/or misunderstanding, causes devastating results, and he is motivated in all of his researching, writing, and teaching by the idea of straightening out socially misconstructed realities. He's recently started blogging at www.wordscuff.com.  

Revisiting Neo-Soul

The following is another in an ongoing series of blog posts from the remarkable students in my Public Intellectuals class. We would welcome any comments or suggestions you might have.

Revisiting Neo-Soul

by Marcus Shepard

The popular music blog Singersroom recently asked an interesting question: "Will Alternative R&B fade away...like Neo-Soul & Hip-Hop Soul?" What's interesting about this question for me is that neo-soul, and some would argue hip-hop soul, never truly found a definition or a sonic boundary to differentiate them from other genres of music during their rise in the 1990s. Different posts could and should be written on hip-hop soul, and even on the validity of the term alternative R&B, as it appears to be a term used to describe white R&B artists who make music similar to that of Black R&B artists such as the late Aaliyah, Brandy, and Monica. I want to focus, though, on exploring the genre of neo-soul for a moment. It's important to engage with neo-soul because a lot of people believe that this musical discourse has either faded away or was a flash-in-the-pan marketing moniker that lost steam as we rolled into the new millennium, which is not the case.

While fans of the music labeled neo-soul can often identify the songs, albums, and/or artists that fall under the genre label, the lack of a concrete definition has made the once-burgeoning genre harder to define. As different sonic discourses continue to mix and create unique sounds, defining and creating boundaries for what is "neo-soul" and what is not may become increasingly difficult. Though the label neo-soul has come under scrutiny from artists, musicians, fans, critics, and academics about its validity as a term and genre, others have gravitated toward the term. The confusion over what neo-soul is adds to the debate surrounding this genre. Defining neo-soul is not meant to exclude artists who are on the periphery or cross over into the genre, but to give space to those singers and musicians who are entrenched within the discourse of the music.

With the introductory track "Let It Be," Jill Scott also expresses her frustration with being labeled and defined as a neo-soul artist. Though Scott does not openly state her anguish at her categorization in the genre, she voices an all-too-familiar cry of artists who simply want to make music devoid of classification.

What do I do
If it's Hip Hop, if it's bebop, reggaeton of the metronome
In your car, in your dorm, nice and warm, whatever form
If Classical, Country, Mood, Rhythm & Blues, Gospel
Whatever it is, Let It Be, Let It Be
Whatever it is, Whatever it is, Let It
If it's deeper soul, if it's Rock & Roll
Spiritual, Factual, Beautiful, Political, Somethin' to Roll To
Let It Be, Whatever it is, Whatever it is, Let It Be
Let It Be, Whatever it is, Let It Be, Let It Be, Let It
Why do do do I, I, I, I
Feel trapped inside a box when I just don't fit into it

Through her sorrow at being defined and "trapped inside a box," Scott has also excavated what neo-soul is. Though she and other artists often have fraught relationships with the term, understanding it as a convergence of sonic discourses within the soul and rap musical traditions opens up a variety of sonic avenues that Scott, her peers, and her predecessors have pursued. Scott, who often transcends the boundaries of different genres, is able to rise above these very boundaries because of the essence of neo-soul. This genre allows her to waft into the vocal atmosphere with operatic vibrato during the closing number of her concert ("He Loves Me (Lyzel In E Flat)"), just as it allows her and Doug E. Fresh to beatbox during their collaborative "All Cried Out Redux."

Being situated on the periphery of two musical legacies, soul and hip-hop, allows artists within this convergence to craft a variety of sonic discourses that draw on the technical, sonic, and lyrical innovations of both genres. The jazz, gospel, and blues influences of soul, which in their own right offer rich musical and lyrical histories, further add to the wide range of sonic possibilities that artists within neo-soul can tap. Add the plentiful sonic and lyrical options that hip-hop has to offer, and the neo-soul genre seems to have boundless opportunity to grow and coalesce.

So what does neo-soul sound like? Neo-soul is an amalgamation of rap and soul music that relies on the technological advances made during the genesis of rap while readily using the live instrumentation of the soul era. Neo-soul builds upon sampling through its own reinterpretations of soul records, such as D'Angelo's take on Smokey Robinson's "Cruisin'," or Lauryn Hill's cover of Frankie Valli's "Can't Take My Eyes Off You." Though these two are traditional covers, Hill and D'Angelo inject a hip-hop backbeat that is heavily pronounced throughout most neo-soul records. Hill's cover of "Can't Take My Eyes Off You" in particular opens with beatboxing, which then leads into the sonic composition of the song and melds into a traditional rap backbeat. Though covers are not unique to this genre, when they occur within the confines of neo-soul, the musical composition of the song is often tweaked and reworked to reflect the sonic collaborations that define the genre.

Though the possibilities seem endless for neo-soul, there is a distinct sonic quality within the genre. Neo-soul is deeply rooted in live instrumentation, and the liner notes of the majority of neo-soul releases showcase the inclusion of studio musicians. Though synthesizing and sampling are present within neo-soul, the genre is building a legacy rooted in both the live instrumentation of soul music and the technologically manufactured sounds of hip-hop.

Striking this sonic balance is one of the challenges the genre faces, and artists who opt for a more live sound or a more synthesized sound find themselves closer to the periphery of the genres that represent that sound. As Erykah Badu famously proclaims, she is "an analog girl in a digital world," and striking the balance between these two worlds is what artists steeped in the musical traditions of neo-soul are all about.

In addition to the sonic quality and components of neo-soul, the genre is also one carried by the lyrical compositions of its artists. While hip-hop soul is a famous fusion of music that is sometimes conflated with neo-soul, artists within the confines of that music are often set apart from neo-soul artists by the paucity of songwriting credits on their résumés as well as the abundance of synthesized beats. This is not to say that one genre is "better" than the other, but that they each exist in different spheres and planes of a similar musical discourse.

Neo-soul artists, as Jill Scott so eloquently pointed out, speak to the realities of life within their self- or co-written compositions, including, but not limited to, issues that touch upon the very essence of human experience. While rap music still speaks to lived experiences, its overarching narrative seems to have shifted to a paradigm that speaks largely to male street credibility and the highly commercialized and commoditized male hustler protagonist.

Neo-soul can be seen, lyrically, as a remix of hip-hop, still speaking to the lived experiences of its listeners as hip-hop and soul did and still do, but with a slant toward the female perspective, as the majority of releases within the confines of neo-soul reflect the female voice.

While both men and women release music within the neo-soul and rap genres, it appears that each genre is dominated by the voice of one sex. Whereas rap music has always been rooted in the male perspective, with relatively few female-centered perspectives, neo-soul operates as the contrasting version: the female perspective takes center stage, while the male perspective is given voice in relatively few releases.

Responding to the obvious exclusion of female voices within hip-hop, neo-soul artists find themselves oftentimes engaging with messages perpetuated within hip-hop and mass media in an attempt to recreate and reclaim those representations of Black womanhood.

Another interesting observation about the neo-soul discography is that Black artists have released the majority, if not all, of the albums considered neo-soul. Though white British soul artists such as Amy Winehouse, Joss Stone, Adele, and Duffy have all released albums and/or songs that could aptly be described as neo-soul, given their sonic and lyrical arrangements, these women are placed under the banner of pop or British soul instead. Jaguar Wright, for one, has pointed out the racialized space that has been built around this marketing genre.

Through the racialization of neo-soul, these artists are able to engage in visual and musical critiques of issues impacting Black communities, such as Jill Scott’s powerful analysis of the state of Black communities in her song “My Petition,” which is lifted from her 2004 album Beautifully Human: Words and Sounds Vol. 2.

Ultimately, neo-soul is a genre that is still alive and well, though the glare of the mainstream press and of platinum-selling singles and albums has wavered. Before one engages with the theorizing of "alternative R&B," it is important to revisit and reengage with the visual and musical discourse that is the genre of neo-soul.

Marcus C. Shepard is a Ph.D. student at the USC Annenberg School for Communication and Journalism. His work explores Black musical performance and its intersections with, and transformative capabilities across, race, class, gender, and sexuality. Specifically, he focuses on the musical genre neo-soul and its sonic, visual, and political implications within communities of color in the United States. Shepard has also worked at the world-famous Apollo Theater in Harlem as an archivist and maintains his ties to this artistic community.

Solidarity Might be for White Women, but it isn't for Feminists


by Nikita Hamilton

 

In early August, the hashtag #SolidarityIsForWhiteWomen sparked an internet-wide conversation about feminism, intersectionality, and inclusion after Mikki Kendall coined the term in response to tweets to and from Hugo Schwyzer, a professor at Pasadena City College. Schwyzer had just gone on an hour-long Twitter rant in which he admitted to leaving women of color out of feminism, and later apologized for it. He then received sympathetic Twitter responses that moved Kendall to tweet "#SolidarityIsForWhiteWomen when the mental health & future prospects for @hugoschwyzer are more important than the damage he did." She felt that women of color were, and are, continuously left out of feminism and that Schwyzer was another example of that exclusion.

Though this is a necessary discussion, what is most interesting about it is that it is a conversation that started decades ago and has never come to a resolution. The inclusion of women of color has been an issue from the very beginnings of first-wave feminism, and we are simply at another iteration of the same discussion. When white middle-class women wanted to fight for the right to go out into a workforce that Black, Asian, and Hispanic women had already been a part of for decades, if not hundreds of years, they all realized that there would be a continued disconnect. How could there not be, when some of these women came from generations of working and slaving women, or generations of women who had been working side by side with the men of their race?

In a recent NPR Code Switch article, Linsay Yoo asked which women were included in the term “women of color,” and advocated for the inclusion of Asian and Hispanic women. Her inquiries and points made sense, since Asian and Hispanic women are also marginalized and often left out of feminism. However, in addition to noticing the continued omission of Arab women from the term “women of color” by each of these commentators, I was left with the question: what do people mean when they say that they want solidarity? Furthermore, what would this solidarity look like, and what are its desired consequences? I believe that this is the question that feminists are really failing to answer.

Mikki Kendall wrote, “Solidarity is a small word for a broad concept; sharing one aspect of our identity with someone else doesn't mean we'll have the same goals, or even the same ideas of community.” Kendall’s definition undoubtedly sends feminists in the direction they need to go, but the word “solidarity” itself is the problem. Solidarity implies equality, and that is not present in the feminist movement or in society at large. We live in a world that stratifies people by their gender, race, sexuality and class. It may well be the expectation of equality carried by a word such as “solidarity” that is the snowball which turns into an avalanche of problems and disagreements. Therefore, it is time to find another label, and it is time to have a very honest conversation among all feminists – both those who feel included in the movement and those who feel excluded from it – about how structural inequalities based upon color, sexuality and socioeconomic status have to be taken into consideration along with gendered issues.

Of course there are some key issues that affect all women because they are biologically female. The attack on women’s bodies by the government, women’s healthcare, violence and sexual assault are all topics that feminists can agree need to be at the forefront of the women’s movement. However, the order of importance of those topics shifts depending on race. For example, the 2000 US Department of Justice survey on intimate partner violence uncovered that such violence was particularly salient for Hispanic women because they “were significantly more likely than non-Hispanic women to report that they were raped by a current or former intimate partner at some time in their lifetime.” For black women, sexual assault is a leading issue. According to the Rape, Abuse and Incest National Network (RAINN), the lifetime rate of rape and attempted rape for all women is 17.6%, while it is 18.8% for black women specifically. There can be consensus on what the issues are, but there also needs to be acceptance of differences, inequalities and the desire for differing prioritizations. Why can’t feminism be a movement of consensus on the overarching issues that affect women, one that also houses Third World and black feminists’ respective prioritized concerns? Why can’t each group be a wall under the roof of feminism that provides support, but consists of different activities in each room of the house?

The Women’s Movement is still needed, but as history has exemplified over and over again, solidarity is not what can, or needs to, be achieved presently. Solidarity is defined as a “community of feelings, purposes, etc.,” and the idea of “community” connotes an equality that is not yet present among all of the women of feminism. A better word may be “consensus,” which means “majority of opinion” or “general agreement,” because feminists can all agree that there are some overarching feminist issues. Either way, the point is that we set ourselves up for failure every time we sit at the table and come to realize that once the initial layer of women’s issues is peeled back, there are too many differences left bare and unacknowledged in the name of a non-existent “solidarity.” Solidarity IS for white women, and for black women, and for Asian women, and for Hispanic women, and for Arab women. Consensus is for feminists. Let’s finally move forward.

Nikita Hamilton is a doctoral student at USC's Annenberg School for Communication and Journalism. Her research interests include gender, race, stereotypes, feminism, film and popular culture.

The Other Media Revolution

This is another in the series of posts from students in my PhD level seminar on the Public Intellectual, which I am teaching this term through the USC Annenberg School of Communication and Journalism.   

The Other Media Revolution

by Mark Hannah

 

I’ve long blogged about the so-called “digital media revolution.”  Yet, deploying digital media to praise digital media has always struck me as a bit self-congratulatory.  Socrates, in Plato’s Gorgias, accuses orators of flattering their audiences in order to persuade them.  This may be the effect, even if it’s not the intention, of blogging enthusiastically about blogging.

To be sure, a meaningful and consequential revolution of our media universe is underway.  This revolution’s technological front has been well chronicled and analyzed (and is represented) by this blog and others like it.  The revolution’s economic front – specifically, the global transformation of media systems from statist to capitalist models – has, I think, been critically underappreciated.

 

What Sprung the Arab Spring?

How attributable is the Arab Spring to Twitter and Facebook, really?  After a wave of news commentary and academic research that has back-patted Western social media companies, some observers now question how much credit digital media truly deserve for engendering social movements.  It’s undeniable that the Internet does, in fact, provide a relatively autonomous space for interaction and mobilization, and that revolutionary ideas have a new vehicle for diffusing throughout a population.  But the salience of these revolutionary ideas may have its origin in other media that are more prevalent in the daily life of ordinary Arab citizens.

With limited Internet access but high satellite TV penetration throughout much of the Arab world, the proliferation of privately owned television networks may, in fact, have been more responsible for creating the kind of cosmopolitan attitudes and democratic mindset that were foundational for popular uprisings in that region.

Authoritarian regimes are sensitive to this phenomenon and, as my colleague Philip Seib points out, Hosni Mubarak responded to protests early on in the Egyptian revolution by pulling the plug on private broadcasters like ON-TV and Dream-TV, preventing them from airing their regular broadcasts.  Of the more than 500 satellite TV channels in the region (more than two-thirds of which are now privately owned!), Al-Jazeera and Al-Arabiya are two news networks that have redefined Middle Eastern journalism and enjoy broad, pan-Arab influence.

The Internet, which represents technological progress and individual interaction, may have emerged as a powerful symbol of democratic protests in the Arab world even while “old media,” with their new (relative) independence from government coercion, may be more responsible for planting the seeds of those protests.

 

America Online? Cultural Exchange On and Off the Web

Is YouTube really exporting American culture abroad?  The prevailing wisdom, fueled by a mix of empirical research and a culture of enthusiasm for digital media, is that the global nature of the Web has opened up creative content for sharing with new international audiences.  Yet, in light of restrictive censorship laws and media consumers’ homophilic tendencies, we may be overstating the broad multicultural exchange that has resulted.

What has significantly increased the influence of American cultural products, however, is the liberalization of entertainment markets internationally.  As international trade barriers loosen, Hollywood films are pouring into foreign countries.  Just last year, China relaxed its restrictions on imported films, raising its annual quota to thirty-four (most of which come from the United States).  This freer trade model, combined with the dramatic expansion of the movie theater market in China (American film studios can expect to generate $20 - $40 million per film these days, as opposed to $1 million per film ten years ago), is a boon for America’s cross-cultural influence in China.

It’s true that rampant piracy, enabled by digital technologies, further increases the reach and influence of American movies and music.  To the extent that the demand for pirated cultural products may be driven by the promotional activity of film studios or record labels, this practice may be seen more as an (illegal) extension of new international trade activity than as a natural extension of any multicultural exchange occurring online.

The cultural influence of trade doesn’t just move in one direction though.  As Michael Lynton, CEO of Sony Pictures Entertainment, insisted in a Wall Street Journal op-ed, economic globalization is as much responsible for bringing other cultures to Hollywood as it is for bringing Hollywood to other cultures.

Put otherwise, media systems are both the cause and the effect of culture.

 

The Cycle of Cultural Production & Consumption

To use a concept from sociology, media are performative. They enable new social solidarities, create new constituencies and, in some cases, even redefine political participation.  Nothing sows the idea of political dissent like the spectacle of an opposition leader publicly criticizing a country’s leader on an independent television channel.  And, on some level, nothing creates a sense of individual economic agency like widespread television advertisements for Adidas and Nike sneakers, competing for the viewer’s preference.

Sociologists also discuss the “embeddedness” of markets within social and political contexts. From this angle, the proliferation of commercial broadcasters and media liberalization are enabled by the kind of social and political progress that they, in turn, spur.

Despite the above examples of how the media universe’s new economic models are transforming public opinion and cultural identity, we remain transfixed by the new technological models – the digital media revolution.  It’s perhaps understandable that reports of deregulation and trade agreements often take a back seat to the trendier tales of the Internet’s global impact. The Internet is, after all, a uniform and universal medium, and the causes and consequences of its introduction to different parts of the world are easily imagined.

In contrast, the increased privatization of media, while a global phenomenon, is constituted differently in different national contexts.  The private ownership of newspapers in the formerly Communist countries of Eastern Europe looks different than the multinational conglomerates that own television channels in Latin America.  Like globalization itself, this global phenomenon is being expressed in variegated and culturally situated ways.

Finally, the story of this “other” media revolution is also a bit counterintuitive to an American audience, which readily identifies the Internet as an empowering and democratizing medium, but has a different experience domestically with the commercialization of news journalism.  We haven’t confronted an autocratic state-run media environment and our commercial media don’t always live up to the high ideals of American journalism.  To a country like ours, which has grown accustomed to an independent press, it’s not always easy to see, as our founders once did, the potential of a free market of ideas (and creative content) as a foundation for independent thought, democratic participation, and cultural identity.

 

Mark Hannah is a doctoral student at USC's Annenberg School for Communication, where he studies the political and cultural consequences of the transformation of media systems internationally. A former political correspondent for PBS's MediaShift blog, Mark has been a staffer on two presidential campaigns and a digital media strategist at Edelman PR.

Non-Conforming Americans: Genre, Race, and Creativity in Popular Music

This is another in a series of posts by the PhD students in the Public Intellectuals seminar I am teaching through the Annenberg School of Communication and Journalism.  

Non-conforming Americans: Genre, Race, and Creativity in Popular Music

by Rebecca Johnson

Papa bear and mama bear. One was Black, and one was white, and from day one, they filled our Lower East Side Manhattan apartment with sound. Beautiful sounds, organized and arranged, and from song to song they would change in shape, tempo, rhythm, amplitude, voice, instrument and narrative. Sometimes those sounds would come from my father’s guitar playing in the living room, and sometimes, even simultaneously, from my bedroom, where I could often be found recording a newly penned song. But most of the time, those sounds came from speakers.

Sometimes my father would put on this:

And the next day he’d play this:

Sometimes my mother would put on this:

And the next day she would play this:

And sometimes, they would both put on this:

The wide array of sounds that I heard was not limited to the inner walls of my downtown apartment, though. Those sounds echoed as I left home every morning to take the train to my Upper East Side magnet school, as I entered that school every day and saw diverse colors in the faces of my classmates, when I visited my father’s parents in Brooklyn, and then my mother’s parents on Long Island. The sounds I heard became interwoven in my identity; song by song, strand by strand, they became my musical DNA. As I got older, I learned how those sounds came together, replicated, and mutated in the history of a world much bigger than myself. I discovered how those sounds had changed over time, often through deletions and erasures, but also how they had evolved because of the insertion of something new, something extra.

The sounds I grew up with were all part of the history of popular music in America. Crucial to the trajectory of that history has been the interactions between African Americans and white Americans. The complicated relationship between these two collectivities has informed much of the way we as a society understand how music helps to shape and influence our identities, and how we understand and perceive different styles, or genres, of music themselves. This post aims to explore how these understandings were formed over time, and how they can be reimagined, and challenged, in the digital age.

Popular music in America was, from its start, not just about sound but about racialized sound. From the moment African sounds were forcibly brought to America in the hands and mouths and hearts and minds of Black slaves, they were marked as different and in opposition to the European sounds already at work in forming the new nation. And yet through music, slaves found a positive means of expression, identification and resistance that they were otherwise denied. While they would eventually be freed, the mark of slavery never left the African American sound and would be used to label Black music as different and not quite equal. As Karl Hagstrom Miller writes in Segregating Sound, in the 1880s music began to move from something we enjoyed as culture to something we also packaged, marketed, and sold as a business. It was then that “a color line,” or “audio-racial imagination” as music scholar Josh Kun calls it, became deeply ingrained in much of our understanding of sound and identity (of course, that line had long existed, but it was intensified and became part of the backbone on which the industry was built).

The color line that still runs through music today was in part cemented through the establishment of genres. The function of genre is to categorize and define music, creating boundaries for what different styles of music should and should not sound like, as well as dictating who should be playing and listening to certain types of music and who should not (for example, based on race, gender, class). The essential word here is “should.”

The racial segregation of the era in which the music business was taking its first steps played a large role in genre segregation. The separate selling of white and Black “race” records by music companies in the early 1900s assumed a link between taste and (racial) identity. Recorded music meant that listeners could not always tell if the artist they were hearing was white or Black, and so it became the job of the music and its marketing to tell them. Genres of music are living, constantly evolving organisms that feed off of input from musicians, listeners, scholars and key industry players such as music publishers and record labels. The contemporary use of genres is a result of this early segmentation.

A selective, condensed timeline of genre segregation goes something like this. In the early twentieth century, many white and Black musicians in the South played the same hillbilly music (sometimes together), which combined the styles of “blues, jazz, old-time fiddle music, and Tin Pan Alley pop.” During this period the best way for musicians to make money was often to be able to perform music that could appeal to both Black and white listeners. Yet hillbilly grew into Country music, a genre first and foremost defined by its whiteness and the way that it helps to construct the idea of what being “white” means. Jump to the age of rock ‘n’ roll, and you find the contributions of Black musicians being appropriated (borrowed, or stolen) and overshadowed by white musicians. If we fast-forward once again to the 1980s, Black DJs in Chicago and Detroit were creating and playing styles such as house and techno, while white DJs in the U.S. and across the pond in the United Kingdom were simultaneously contributing to the development of electronic music. Yet today, the overarching genre of electronic music, and its numerous subgenres, is commonly known to be a style of music created and played by white DJs.

The whiteness that came to define many genres meant to a degree erasing the blackness that also existed in them. Styles such as country and hip-hop have become important markers of social identity and cultural practices far beyond mere sound. At the same time, the rules that have come to define many genres have erased the hybrid origins of much of American popular music. They erase the fact that Blacks and whites, for better or worse, have lived, created and shared music side by side. And, they have created identities side by side, in response and through interactions with one another.

But what if your identity as a listener or musician doesn’t fit into any of the categories provided? What if the music you like, and the identity you’re constantly in the process of forming, goes something like this:

Unlocking The Truth - Malcolm Brickhouse & Jarad Dawkins from The Avant/Garde Diaries on Vimeo.

What if you are not as confident as Malcolm and Jarad, and you are faced with a world telling you that your interests lie in the wrong place because you are the wrong race? What if you’re like me, with skin and an identity created by the bonding of two different worlds, but you are bombarded with messages that tell you life is either about picking one or figuring out how to navigate between the two?

Popular music is a space in which identities are imagined, formed, represented and contested. Racialized genre categories that function within the commercial market have not only restricted the ability of musicians to freely and creatively make (and sell) music, but have also impacted the ability of listeners to freely consume.

Take this encounter, for example:

While humorous, it is also realistic. In the privacy of homes (or cars), in the safe space of like-minded individuals, and through the headphones attached to portable music players, consumption has always crossed racial lines. In the public arena, though, Black listeners, just like Black musicians, have not had the same affordances as white listeners and white musicians to express and participate in non-conformity. This does not erase the fact that these non-conforming consumers exist, or that they have power. The boundaries of genre, and the identities they reflect and produce, are imposed from the top down and developed from the bottom up. The old institutions will try to exercise and maintain control in the digital age where they can (just as in the physical world), but our current media environment is more consumer-driven than ever. Consumers now want to pull or seek out music, rather than having it solely pushed on them.

We are in a period of transformation. The Internet and new digital technologies have forever altered the landscape of the music industry. The traditional gatekeepers might not be interested in taking risks as they determine how to survive in the digital age, but they no longer hold all of the power. Digital distribution creates new opportunities to have music widely heard outside of established structures. Once-revered music critics, many of whom contribute(d) to and reinforce(d) racialized genre boundaries, now face competition from music recommendation websites, music blogs, amateur reviewers and more. In the past, radio might have been the medium that enabled artists to spread their music, but the Internet is quickly coming for that title. In an attempt to more accurately reflect consumption by listeners, the music charts in Billboard magazine have begun incorporating YouTube plays and Internet streaming into their calculations. Potentially, anyone can make it to the top.

The current moment is providing the opportunity to complicate our understanding of racialized sound. The dominant conceptions of what it means to be Black, what it means to be white, and what it means to create music through those identities are being challenged.

Like this:

Splitting from genre tradition does not have to erase social identities and histories; it can allow music to expand and breathe. The remix and mashup practices of both hip-hop and electronic music have demonstrated how boundaries and identities can be reimagined in a way that recognizes points of similarity, but also celebrates difference.

Doing so is how we get songs such as this:

And artists who make music like this:

Holes are increasingly being punched into the lines drawn to bound music and culture. The racial and ethnic makeup of America is changing. From musicians challenging stereotypes in the studio and on the stage, to series such as NPR’s “When Our Kids Own America,” people are taking notice. The mutations in popular music that led to its evolution have always been about looking back, looking left, and looking right in order to look forward. The digital revolution is about providing the tools to make this happen.

Rebecca Johnson is a songwriter and Ph.D. student studying the commodification of American popular music in the digital age at the USC Annenberg School for Communication and Journalism. Her work explores how music is produced, marketed, distributed and consumed in a constantly changing, technologically driven, and globally connected world.

Work/Life Balance as Women's Labor

This is another in a series of blog posts produced by students in my Public Intellectuals seminar.

Work/Life Balance as Women's Labor

by Tisha Dejmanee

When Anne-Marie Slaughter’s 2012 Atlantic article “Why Women Still Can’t Have It All” came out, I have to admit I was crushed. Knowing that I want both a career and a family in my future, I found Slaughter’s advice demoralising. However, what upset me more was her scapegoating of the feminist movement as a way of rationalising her own disappointment. This led me to explore the continuing inadequacy of the support available to parents in the workplace, as well as the injustices inherent in the public framing of worklife balance.

Worklife balance is a catchphrase endemic to contemporary life. Despite its ambiguous nomenclature and holistic connotations, worklife balance is a problem created for and directed towards professional, middle-class women with children. Exploring the way this concept has captured the social imaginary reveals that the myth of equality and meritocracy persists in spite of evidence that structural inequalities continue to perpetuate social injustices by gender, race, class and sexuality. It also exposes the continuing demise of second wave feminism in favour of narratives of retreatism, the trend for women to leave the workforce and return home to embrace conservative gender roles.

The circulation of worklife balance as a women’s issue is only logical in an environment where the private and public spheres remain sharply divided. While gendered subjects may traverse the two spheres, they maintain primary, gendered obligations to one sphere. Traditionally, men have occupied the public sphere while women occupied the private sphere. The work-life strain that we currently see is a remnant of the battle middle-class women fought in the 60s and 70s, as part of the agenda of liberal feminism, to gain equal access to the ‘masculine’ public sphere. This access has been framed as a privilege for women who, from the 80s – with the enduring icon of the high-paced ‘superwoman’ who managed both family and career – until the present, have been required to posture as masculine to be taken seriously in the public sphere, while maintaining their naturalised, primary responsibilities within the private sphere.

The unsustainability of such a system is inevitable: Women must work harder to show their commitment to the workplace, in order to fight off the assumption that their place is still in the home. The gravitational pull of domesticity and its chores remains applicable only to women, whose success as gendered subjects is still predicated on their ability to keep their house and family in order. Men have little incentive to take on more of the burden of private sphere work, as it is devalued and works to destabilise their inherent male privilege (hence the popular representation of domestic men in ads as comically incompetent, or as worthy of laudatory praise for the smallest domestic contribution). Accordingly, women feel the strain of the opposing forces of private and public labour, which relentlessly threaten to collide yet are required to be kept strictly separated.

Moreover, worklife balance is regarded as an issue that specifically pertains to mothers, because while self-care, relationships with friends and romantic relationships might be desirable, all of these things can ultimately be sacrificed for work. Motherhood remains sacred in our society and, due to the biological mechanisms of pregnancy, is naturalised both in the concept of woman and in the successful gendered performance of femininity. This explains the stubborn social refusal to acknowledge child-free women as anything but deviant, and the delighted novelty with which stay-at-home dads are regarded, a novelty popularised in cultural texts such as the NBC sitcom “Guys with Kids” or the A&E reality television show “Modern Dad,” which follows the lives of four stay-at-home dads living in Austin, Texas.

Photo of Crossfit Session by Pregnant Woman that Caused Internet to Explode

Motherhood heightens the stakes of worklife balance in two particular ways. Firstly, it exacerbates the difficulties of attaining success at work and at home, because the demands set for mothers in contemporary life are becoming increasingly and unreasonably high. Fear has always been used to motivate mothers, who are blamed for everything from physical defects that occur in utero to the social problems which may affect children in later life, as can be seen from the furore that ensued when a mother posted this photo of herself doing a crossfit workout while 8 months pregnant with her third child. However, motherhood has become a site that demands constant attention, a trend that Susan Douglas and Meredith Michaels call ‘new-momism’: ‘the new ideal of a mom as a transcendent and ideal woman who must “devote … her entire physical, psychological, emotional, and intellectual being, 24/7, to her children”’ (The Mommy Myth, 2004, p. 4).

Secondly, motherhood physically and symbolically hinders the mobility of women across the boundary from private to public, as it reinforces the materiality of female embodiment. Women’s bodies are marked by the fertility clock, imagined in popular rhetoric as a timebomb that threatens to explode at approximately the same time as educated women are experiencing major promotions in their careers, and this provides a physical, ‘biological’ barrier to accompany the limit of the glass ceiling. If infants are miraculously born of the barren, middle-aged professional woman’s body, they are imagined in abject terms – hanging off the breasts of their mothers while their own small, chaotic bodies threaten disruption to the strictly scheduled, sanitized bureaucratic space of the office. This is the infiltration of home life that is epitomised when Sarah Jessica Parker, playing an investment banker with two small children, comes to work with pancake batter on her (which could just as easily have been baby vomit or various other infant bodily fluids) in the 2011 film I Don’t Know How She Does It.

Public attention to this issue has recently been elevated by the publication of writings from high-profile women, including Anne-Marie Slaughter, Sheryl Sandberg and Harvard professor Radhika Nagpal; by speculation about high-profile women such as Marissa Mayer and Tina Fey; and through fictional representations of women such as the film (originally a book) I Don’t Know How She Does It, Miranda Hobbes in the television show Sex and the City, and Alicia Florrick in the television drama The Good Wife.

Returning now to Slaughter’s article: she details her experience by frankly discussing the tensions that arose from managing her high-profile job as the first woman director of policy planning at the State Department. Specifically, the problem was her troubled teenage son, whom she saw only when she travelled home on weekends. Ultimately, Slaughter decided to return to her tenured position at Princeton after two years: “When people asked why I had left government I explained that I’d come home not only because of Princeton’s rules … but also because of my desire to be with my family and my conclusion that juggling high-level government work with the needs of two teenage boys was not possible.”

While Slaughter remains Professor Emeritus at an Ivy League university (and her insinuation in the article that academia is the “soft option” is certainly offensive to others in the academy), she speaks of her experience as a failure of sorts, an impression affirmed by the response of other women who seem to dismiss her choice to put family over career. Slaughter sees this as the culture of feminist expectation set for contemporary, educated young women – what Anita Harris would call ‘can-do’ girls: “I’d been part, albeit unwittingly, of making millions of women feel that they are to blame if they cannot manage to rise up the ladder as fast as men and also have a family and an active home life (and be thin and beautiful to boot).” In making this claim, Slaughter becomes a pantsuit-wearing, professional spokesperson for retreatism.

Diane Negra discusses retreatism in her 2009 book, What a Girl Wants?: Fantasizing the Reclamation of Self in Postfeminism. Negra describes retreatism as ‘the pleasure and comfort of (re)claiming an identity uncomplicated by gender politics, postmodernism, or institutional critique’ (2009, p. 2). She describes a common narrative trope wherein ‘the postfeminist subject is represented as having lost herself but then [(re)achieves] stability through romance, de-aging, a makeover, by giving up paid work, or by “coming home”’ (2009, p. 5). Retreatism takes cultural form by shepherding working women back into the home using the rhetoric of choice, wherein the second wave slogan of choice is subverted to justify the adoption of conservative gender positions. Retreatism is also reinforced through the glamorisation of hegemonically feminine rituals such as wedding culture, domestic activities such as baking and crafting, motherhood and girlie culture.

In keeping with the retreatist narrative, the personal crisis Slaughter faces is the problematic behaviour of her son, and ultimately she decides that she would rather move home to be with her family full-time than continue her prestigious State Department job. While this personal decision is one that should be accepted with compassion and respect, Slaughter uses this narrative to implicate the feminist movement as the source of blame: “Women of my generation have clung to the feminist credo we were raised with, even as our ranks have been steadily thinned by unresolvable tensions between family and career, because we are determined not to drop the flag for the next generation. But when many members of the younger generation have stopped listening, on the grounds that glibly repeating ‘you can have it all’ is simply airbrushing reality, it is time to talk.”

This backlash against feminism is neither as novel nor as fair as Slaughter suggests, but it does uncover (as I suggested earlier) the continuing scapegoating of the feminist movement as the source of women’s stress and unhappiness, in preference to addressing the rigid structural and organisational inequalities that require women to stretch themselves thin. Slaughter does not acknowledge the personal benefits she has received from the feminist movement and the women who helped pave the way for her success, and in doing so she contributes to the postfeminist belief that second wave feminism is ‘done’ and irrelevant – even harmful – in the current era.

Slaughter suggests that women are loath to admit that they like being at home, which more than anything reveals the very limited social circle that she inhabits and addresses. Retreatism glorifies the home environment, and this new domesticity – as stylised as it is mythical – is the logical conclusion of Slaughter’s assertion that women cannot have it all. Moreover, men become the heroes in this framing of the problem, celebrated not so much for their support in overturning structural inequalities as for their ‘willingness’ to pick up the slack – which typically comprises no more than an equal share of the labour – around the home.

What bewilders me most about this account is Slaughter's need to discredit the feminist movement. Without trivialising the personal decisions made by Slaughter and many other women in the negotiation of work and childcare, at some point the glaring trend towards retreatism must be considered as more than a collection of individual women’s choices: It is a clue that systematic, institutionalised gender inequality continues to permeate the organisation of work and the family unit.

Slaughter points out the double standard whereby men taking religious time off are respected while women taking time off to care for their families are not, and the difference in attitude towards the discipline of the marathon runner versus the discipline of the organised working mother. To me, this does not indicate a failure of feminism – it suggests that feminism has not yet gone far enough. However, yet again the productive anger that feminism bestowed upon us has been redirected into anger channelled at feminism, taking away the opportunity to talk about systematic failures in the separation of public and private life, and their continued gendered connotations.

Slaughter's opinion, although considered, is not the final word on this issue. There are many unanswered questions that arise from her article, including:

  • How we might understand the craving for worklife balance as a gender-neutral response to the upheaval of working conditions in the current economic, technological and cultural moment
  • How technology and the economy are encouraging ever more fusions between the personal and the private, and what the advantages and disadvantages of such mergers might be
  • How to talk about worklife balance for the working classes, whose voices on this matter are sorely needed
  • How to talk about women in a way that is not solely predicated on their roles as caregivers to others

We need people from many different perspectives to share their experiences and contribute to this discussion, and I invite you to do so in the comments below.

Tisha Dejmanee is a doctoral student in Communication at the University of Southern California. Her research interests lie at the interface of feminist theory and digital technologies, particularly postfeminist representations in the media; conceptualising online embodiment; and practices of food blogging and digital labour.

What Do We Expect from Environmental Risk Communicators?

This is another in a series of blog posts from the PhD students taking my class on Public Intellectuals: Theory and Practice at USC's Annenberg School of Communication and Journalism.  

What do we expect from environmental risk communicators?

by Xin Wang

A recent poll conducted by The New York Times showed that although many Americans are dedicated in principle to the generic “environmentalist” agenda, we – as individuals – stop short of enacting real changes in our habits and in our daily lives, changes that would help undo some of the ecological devastation we claim to be concerned about. For example, the alarm of global warming or climate change has been sounded repeatedly, but society, collectively and individually, still generally turns a deaf ear, partly because people assume the potential risks of sea-level rise and glacial melting to be chronic, diffuse in time and space, natural, and not dreadful in their impact. Continued exposure to more alarming facts does not lead to enhanced alertness but rather to fading interest, or ecofatigue: we pay “lip service” to many environmental concepts without engaging in the behaviors necessary to turn concepts into action, or we simply become increasingly apathetic. In short, we are a society of armchair environmentalists.

The burgeoning civic discourses about environmental issues must confront this apathy. Our perspectives on environmental issues are influenced by official discourses such as public hearings and mass-mediated government accounts: we learn about environmental problems by reading reports of scientific studies in national and local newspapers; by watching the Discovery Channel and listening to NPR’s Living on Earth; by attending public hearings or events. By nature, however, these official environmental discourses tend toward a monologic framework that obscures diversity and suppresses, rather than elicits, the dialogic potential of any utterance.

So here is our question: what kind of environmental risk communicators do we really need?

One challenge to effective environmental risk communication is that the narrative of environmental apocalypse still dominates as a standard rhetorical technique in communicating environmental problems to the public. Apocalyptic prophets continue to blow the whistle on existing and developing environmental problems; films such as The Core (2003) and The Day After Tomorrow (2004) suggest that our biggest threat is the earth itself. While scholars agree that such apocalyptic narratives can initiate public discourse about, and intervention in, impending ecological disaster, the overuse of fear discourse is highly controversial given its euphemism, vagueness, and hyperbole, which often lead to procrastination and inaction. Those who are frightened, angry, and powerless will resist the information that the risk is modest; those who are optimistic and overconfident will resist the information that their risk is substantial.

Another challenge facing environmental communication results from difficulties in producing knowledge to support improved decision making. Undoubtedly, society requires knowledge in engineering and the natural sciences, yet this is insufficient for producing a transition to more sustainable communities. In the traditional technocratic model, where there is little or even no interaction between scientific experts and the public, scientists decide what to study and make information available to society by placing it on a “loading dock,” then waiting for society to pick it up and use it. This process has largely failed to meet societal needs.

Environmental concern is a broad concept that refers to a wide range of phenomena – from awareness of environmental problems to support for environmental protection – that reflect attitudes, related cognitions, and behavioral intentions toward the environment. In this sense, public opinion and media coverage play a significant role in eliciting questions, causing changes, resolving problems, making improvements, and reacting to decisions about the environment taken by local and national authorities.

Under the social constructionist model, by contrast – a model that attends to the flow of technical information while acknowledging the values, beliefs, and emotions shared between scientific experts and the public – an interactive exchange of information takes place: an improved integration of invested parties, with initiatives that stress co-learning and focus on negotiation and power sharing.

Trust or confidence in the risk communicator is another important factor to be taken into account where potential personal harm is concerned: if the communicator is viewed as having a compromised mandate or a lack of competence, credence in the information provided tends to be weakened accordingly. Likewise, if the particular risk has been mismanaged or neglected in the past, skepticism and distrust may greet attempts to communicate risks. It is far more difficult to create or earn trust than to destroy it. If people do not trust an organization, negative information associated with that organization reinforces their distrust, whereas positive information is discounted (Cvetkovich et al., 2002).

When the control of risk is not at the personal level, trust becomes a major and perhaps the most important variable in public acceptance of the risk management approach. The single biggest contributor to increasing trust and credibility is the organization’s ability to care or show empathy.

On the one hand, when experts refuse to provide information, a hungry public will fill the void, often with rumor, supposition, and less-than-scientific theories. Silence from experts and decision makers breeds fear and suspicion among those at risk and makes later risk communication much more difficult. On the other hand, information alone, no matter how carefully packaged and presented, will not communicate risk effectively if trust and credibility are not established first.

It is time to advocate a new environmental risk discourse, and to develop a practical wisdom grounded in the situated practice of communicators. Risks and problems are socially constructed. While grave threats may exist in the environment, it is the perception of such danger, rather than the reality itself, that moves us to take action.

Culture, social networks, and communication practices are nuanced, specific, locally based, and often highly resilient. The objective of effective and productive environmental communication should lie in democratizing the way control affects how people define risk and how they approach information about risk, and in “formulating the meaningfulness of conversational interaction to participants in terms they find resonant, important to them, and thereby opening portals into their communal standards for such action” (Carbaugh, 2005, p. xiii).

Xin Wang is a Ph.D. student at the Annenberg School for Communication, University of Southern California, with an M.A. in Mass Communication and a B.A. in Russian Language from Peking University, China. She has eight years of experience in professional journalism, media marketing and management at the People's Daily, and is a co-founder of the weekly newspaper China Energy News. Her current research interests concentrate on risk and environmental communication, nation branding, public diplomacy, and civic engagement.

"To JK Rowling, From Cho Chang": Responding to Asian Stereotyping in Popular Culture

This is the second in a series of blog posts produced by the students in my Public Intellectuals seminar at USC's Annenberg School of Communication and Journalism. We appreciate any feedback or response you'd like to share.

"To JK Rowling, From Cho Chang": Responding to Asian Stereotyping in Popular Culture

by Diana Lee

 

A while back, my friend and fellow graduate student sent me a simple email with the subject line, “Calling out the representation of Asian women in Harry Potter books,” and a short one-line message:

“Saw this and thought of you – the videos are really interesting!!”

I was immediately intrigued by her email because she is, as I mentioned, a respected friend and colleague, and I know she is thoughtful about her communication, but also because, before clicking on the link, I wasn’t certain what she was referring to. Had she thought of me because of my love of the magical world of Harry Potter? My academic and personal interest in representations of Asians, Asian Americans, and gender in U.S. media and popular culture? Was it because of my deep appreciation of people who create counter-narratives that challenge stereotypes and foreground voices that are not frequently heard in “mainstream” dialogue? Or perhaps it was a show of support for my hopeful but yet-to-be-solidified desire to combine my enthusiasm for all of these things in academic life? The answer? Yes. It turns out she was pointing me towards something that exemplified all of these things, and much more.

 

In April 2013, a YouTube video of college student and spoken word artist Rachel Rostad performing “To JK Rowling, from Cho Chang” went viral. In the video, she performs a poem that challenges the representations of Asian women and other marginalized groups in the Harry Potter series and other popular stories (such as Miss Saigon, Madame Butterfly, and Memoirs of a Geisha). Written and delivered in the style of many spoken word performances, the poem is filled with provocative examples artfully expressed to maximize emotional impact, and Rachel performs it in a powerful, clear, strong voice.

You can hear her frustration with the fact that Asians and Asian women in the U.S. are constantly misrepresented in shallow and/or stereotypical roles in books, movies, and television shows. You follow along as she exposes the subtle but sadly pervasive ways these caricatures are presented – with “Asian” accented English, with “foreign” names that may or may not make sense in actual “Asian” languages, as disposable minor characters used to set up the focus on the White leading woman who is the “real” love interest, as sexually “exotic” Asian women who are submissive and/or hypersexualized, only to be used and then discarded or left behind. And finally, at the end of the poem, through a story that comes across as her own, she shows why we should pay attention to these limited representations, offering an example of how they can influence our everyday interactions.

She makes the case that when we don’t see other representations of characters with depth and breadth that look and sound and think and love like real people, then the limited portrayals can impact not only the possibilities we can imagine for ourselves, but also what others see as possible for or with us. Rachel Rostad used this poem as a creative and powerful challenge to pervasive, potentially damaging, familiar portrayals, urging us to think critically about the stories, images, and identities we participate in consuming and perpetuating.

But that wasn’t all. She also did something else that we should highlight and celebrate.

The video of her spoken word performance went viral, generating a flurry of criticism and commentary. Instead of ignoring or dismissing these responses or getting pulled into destructively defensive or combative flame wars, she used the opportunity to reflect, learn, teach, and engage in a (public) conversation. She did this in a number of ways and across multiple platforms (e.g., YouTube, tumblr, blog posts, Facebook), but the most visible one is the follow-up video she created a few days after the first video was posted, which was called, “Response To Critiques of ‘To JK Rowling, from Cho Chang.’”

In the response video, Rachel speaks directly to the camera, beginning with “hey there, Internet!,” and point by point, articulately, sincerely, and calmly addresses the five main critiques that came pouring in to her through the various mediated forms she was involved with.

One point addressed the possibility that “Cho Chang” could be a legitimate name, despite what she articulated in her poem about “Cho” and “Chang” being two last names. Rachel admitted ignorance of Chinese and other naming practices, acknowledged that the line in the poem about Cho’s name was problematic, apologized for marginalizing and misrepresenting parts of a community she was trying to empower, and urged viewers to also focus on the other themes she draws on in the rest of the poem.

Following that, Rachel’s next series of comments emphasized that she does not speak for all Asian women, and is not claiming to through her work. She apologized again for unintended mischaracterizations, especially to those who reached out to her saying they felt misrepresented by her poem. Rachel then used these interactions to encourage reflection about and reemphasize the importance of a wider range of media and pop cultural portrayals for Asians and Asian women, which was one of the main points of her spoken word piece, saying, “I’m very sorry I misrepresented you. But I don’t think either of us is to blame for this. I would ask you, what conditions are in place that make it so that you are so defensive that I, someone with a completely different experience of oppression, am not representing your voice? It’s sad that we live in a society where my voice is so easily mistaken for yours - where our differing identities are viewed as interchangeable.”

Continuing on the theme of advocating for a wider range of media representations and prompting us to think critically about the representations we do see, Rachel’s third point was about the realistic portrayal of a grieving character versus the in-depth character development of the Asian and White female lead characters. Yes, Cho was sad about her boyfriend Cedric Diggory’s death and confused about her developing feelings for Harry, so she was crying, pensive, and sad most of the time; but JK Rowling set up Cho as weak to make Ginny, Harry’s eventual love interest and a White woman, look stronger. This may not have been intentionally racially charged, but it is important to think about, because discrimination and prejudice are oftentimes not only about intentionality. This is a theme that recurs in mainstream films and stories – women of color appear as minor, brief, undeveloped characters to set up the “real” relationship for the main White characters later in the narrative, and the more we see it repeated with no alternative portrayals, the more it has the potential to seem “natural.”

Similarly, her fourth point is also about the different forms that problematic representations take, describing why she talked about Dumbledore’s sexuality, which some argued seemed tangential to the main themes of the poem. Thinking about representation and whose stories are privileged is not only about who is absent from the story, or the limited roles people are allowed to play; it is also about considering that even when characters hold prominent positions within stories, aspects of their identities may not be developed in a way that illustrates the depth and complexity of our lived realities.

Rachel’s fifth point in her response video is about how she does not hate Harry Potter or JK Rowling (or Ravenclaw, I’m assuming!), but rather is a fan who grew up on the books and went to all the midnight showings, and importantly, is also able to think critically about and critique a world that she also enjoys.

And finally, she closes the response video with this message:

That's it for now. I understand if you still disagree with me, but I hope you now disagree with me for the arguments I'm actually making. It's been humbling and amazing to watch people respond to this video. I think that the presence of so much passionate dialogue means that this is an issue that needs to be talked about. And yes, I made mistakes, just as I think JK Rowling did with some of her characterizations. But what I hope people realize is that dialogue about social justice is not about blaming people for making mistakes, whether it's me or JK Rowling. It's about calling attention to mistakes, which, I'll be the first to admit, is painful, and using those mistakes as an opportunity to grow.

I personally have learned so much from the mistakes I've made in this process, and I want to thank the community for calling me out on that. Social justice is about holding each other accountable. And I hope as a devoted fan base and as an amazing community we can continue to use my piece as a jumping off point for further dialogue, growth, and reflection.

Thank you.

We should celebrate this as an exemplary instance of reflexivity, of praxis – of the liberating power of reflection, awareness, and action. Of an intelligent, passionate young person invested in learning from and contributing to her community. Of the engaging, participatory possibilities available through working with popular culture and using technology, media, and newer forms of mediated communication as tools for transformative education. This is also a great opportunity for educators and families to unpack the pedagogical implications of what happens when you find something young people are excited about and engage in this kind of expression and communication with a larger community. And it is an example of what media scholar Henry Jenkins calls participatory culture, specifically in relation to new media literacies and civic engagement.

These words – reflexivity, praxis, pedagogy, new media, media literacy, civic engagement – get thrown around in academic institutions and circles all the time. And we should continue to teach them and explore their philosophies and apply their meanings, but we cannot forget the importance of grounding them in concrete examples, not just for academics or practitioners, but also for people who may not use these same terms but still find the practices empowering. Rachel Rostad’s two videos, along with her other critical engagement with this discourse, are a great, accessible example of what we hope people of all ages are doing when they participate in mediated communication, engage with popular culture, and otherwise interact out in the world. We hope that people are actively engaged with their media and popular cultural stories and artifacts, and with each other. We hope that people are thinking about ideas, sharing them, playing with and acting on them, challenging each other and working out responses, incorporating new information, helping each other to learn and grow, and then repeating that process again and again.

By following Rachel’s videos and active online engagement with a larger community, we can literally see and hear this messy, discursive, interactive, transformative learning and teaching process unfolding. One of the key messages Rachel communicates is that she has learned a lot through the process of writing this poem, performing it, posting it online, receiving and engaging with all of the responses to it, and creating the follow-up video. She used these interactions in several ways. She apologized for areas where she made mistakes, where she misrepresented or silenced populations she was trying to empower. She clarified her perspective on points that either were not clear in the delivery of the poem or could not be expanded on given the format of spoken word poetry. She used the experience as a way to take in and address critiques about areas she could have presented differently. And finally, she spoke about the complexity of representation and tokenization both within the poem itself, which addressed popular cultural representation via Cho Chang, and in terms of her speaking about these topics from her positionality as a woman of color who is considered “Asian” in the United States. She shared with us her processing and grappling with these issues, thanked all those who responded or commented, and kept the door open for future dialogue over these issues. She did all this, and publicly. What a courageous thing to do.

In one instance, for example, a blogger at Angry Asian Girls United (AAGU) posted a response to the original poem, calling Rachel out for her glaring misrepresentation of South Asians and for ignoring Pan-Asian solidarity and identity. The blogger also critiqued her for taking up the term “Brown” as an East Asian, and for conflating the varying meanings of the “Asian” label in U.S. versus U.K. contexts (where Harry Potter’s world is set): in the U.K., “Asian” refers to Indians and other South Asians, e.g., Sri Lankans, Bangladeshis, and Pakistanis, while people who look like Cho Chang would be referred to as “Chinese,” “Korean,” or another label associated with the East Asian country they hailed from. In the poem, Rachel speaks of “Asians” as if the label equates to East Asians, and does not count the Indian characters Padma and Parvati Patil when she speaks of “Asians,” a common issue when people have not fully unpacked the deeply ingrained, volatile, problematic U.S. (and global) race categorizations. She does, however, include the Patil twins, as well as herself, as “Brown” and “minority.” I believe she used the term “Brown” as an attempt to empower and to connect with discourses of identity politics that are possibly specific to the U.S. cultural context, but whether (East, South, Central, Middle Eastern) Asians are included in that term, or whether she should or can use it, can be saved for a separate conversation. The point is, the AAGU blog post called Rachel out on the conflation, misrepresentation, and silencing nature of her use of these terms, and she heard them, learned from the criticism, and engaged with it.

This one relatively small but important part of the larger example shows how many people were actively engaged, challenging both their own and each other’s thinking through this topic, and in a public way, which allowed others (like you and me) to witness and/or participate in the discussion, even months later. Additionally, Rachel and the people commenting on her videos were engaged through multiple, networked avenues. In the example above, the video was first posted on YouTube, where it acquired comments. Rachel was looking for a way to learn about and respond succinctly to the YouTube comments about the misrepresentation of South Asians, and she found one through the Angry Asian Girls United blog post. Rachel then linked to this blog post and responded to it on her Tumblr site, and also apologized for misrepresenting Asian women in her response video, which was posted on YouTube. And to begin with, both Rachel and the AAGU blogger were familiar with Harry Potter, whether through the books or movies (or some other form, e.g., video games, news, conversations), and the conversation and rapid spread of the discussion were supported by an already existing Harry Potter fan base. Through their dialogue, both Rachel and the AAGU blogger were pushing at the fluid markers and conceptions of identity and how identity impacts and is impacted by visibility, representation, and socio-historical context.

When thinking about social justice or civic engagement, we often look for big things – movements, large groups mobilizing many people – but sometimes we should shift our perspective a bit and focus in on the incredible things small groups of individuals are doing. They can also have a big impact. Word of mouth is a powerful thing.

In the end, it all comes full circle. Rachel shared her performance and reflection with communities in person and on the internet, my friend shared the videos with me, and now I’m sharing my thoughts with you. I will join both Rachel Rostad and my friend and echo their sentiments. Hi. I saw this, and thought of you. The videos are really interesting! I hope we can continue to use these conversations as a jumping off point for further dialogue, growth, and reflection. Thank you!

For more information on Rachel Rostad, visit her Tumblr or Facebook pages. Rachel is currently a student at Macalester College in St. Paul, Minnesota. In keeping with voicing diverse stories about Asian Americans, she also has a great poem, “Adoption,” which is about identity, belonging, and family, from the perspective of a Korean adoptee growing up with a non-Korean family in the U.S., and one of her latest poems is about her Names.

Diana Lee is a Ph.D. student at USC Annenberg School for Communication and Journalism. Her work explores representations of race, ethnicity, gender, nationality, and other aspects of identity in U.S. media and popular culture. Specifically, she focuses on media portrayals of Asians and Asian Americans, implications of stereotypical and limited representation, and the educational and empowering nature of counter-narratives. Of particular interest are counter-narratives created through networked, mediated expressions, as well as participatory experiences and communities.

Who Reaps the Rewards of Live-Tweeting in the TV Attention Economy?

A few weeks ago, I shared the syllabus of a new class I am teaching this term in the Annenberg School of Communication and Journalism focusing on helping my students acquire the skills and knowledge they are going to need to put a more public-facing dimension on their scholarship. This experiment in training public intellectuals has continued with great success. Over the next few weeks, I am going to be sharing here the blog posts my students produced, reflecting a range of different research interests in media and communications. These posts should allow you to hear the voices and get a sense of the commitments which drive this generation of students. I am pleased to be sharing their work with the world and I am enormously proud of what they've been able to accomplish in a very short period of time.  

Who Reaps the Rewards of Live-Tweeting in the TV Attention Economy?

by Katie Walsh

When the newest season of The Bachelorette rolled around this summer, I was excited, not just for the male antics and cattiness I unabashedly take pleasure in as a fan, but as a writer and media studies student interested in the machinations of reality TV. I was pleasantly surprised to find that the show’s 9th season would provide a wealth of resources for my research into reality TV fans and their online productions. With the first TV spots advertising “MAN TEARS” in bold letters, I rubbed my hands together in anticipation of the humiliations Mark Burnett and co. would subject the newest bachelors to, feeding the needs of the tweeting, recapping viewers who tune in for the spilled tears, and not so much the fairytale romance.

I began my research into The Bachelor/Bachelorette ABC franchise as a fan (albeit the kind of fan who tuned in for the opportunities to snark with friends). Despite, or because of, my feminism, I was compelled by the showy courtship ritual-turned-competition, and the best and worst that it brought out in its participants. As an avid reader of recaps, and a live-tweeter, I wanted to understand more about how these disparate texts worked in tandem with each other. I am of the mind that the instant feedback received by reality show producers, in the form of recaps, comments, and live-tweets, has influenced the shape of their product, particularly in terms of the narrative.

Anna McCarthy describes the reality TV genre as a “theater of suffering,”[1] and part of reality TV fandom is often the voyeurism and perverse pleasure in the exposure of humiliation and pain that these love-seekers go through (exhibit A: the blog Forever Alone: Faces of Rejected Bachelorettes, which consists solely of screenshots of crying women). Women are expected to perform their gender properly in order to “win” the bachelor. When they fail, often by committing the cardinal feminine sin of being “mean,” or being too sexual (or not sexual enough—just the right amount!), they are either rejected, or humiliated, or both, by fellow cast members and by producers in the way they shape the narrative. The camera invades private moments of suffering, such as the infamous reject limo shots, which shine a spotlight on the women as they mourn their rejection. This practice has seeped into The Bachelorette, as men descend into gossiping and accusations about those who are “not there for the right reasons.” Male contestants are emasculated onscreen for their over-the-top demonstrations of emotion, such as singing embarrassing ballads, too-soon declarations of love, and crying (hence the advertised “MAN TEARS”). Both of these practices reinforce strict, and traditional, gender stereotypes: women must be nice and not too sexual, but not frigid, and men must be stoic, masculine, and controlled.

My initial assertion was that these recaps and live-tweets demonstrate the “anti-fan” readings that many fans engage in, and that they affected the show’s editing and narrative structure. But this summer, Team Bachelorette did me one better: they slapped those tweets, good, bad, and snarky, right onto the screen. The narrative structure of a show like Bachelorette is uniquely suited to a social-media-enhanced viewing experience, which is often a necessary part of getting through the bloated two-hour running time. Knowing that some viewers were engaging with a second screen platform during the show, producers brought that second screen inside the screen itself, keeping viewers’ eyes on both their tweets and on advertisements. One can’t really live-tweet unless one is watching during the broadcast time, which means real-time ads—no fast-forward or time shifting here. This is all the better for letting advertisers know just how many activated, interactive viewers are participating—lucrative commodities for sale in the attention economy.

Bachelorette allowed a variety of tweets onscreen, in terms of content and sentiment about the show. Not all of the tweets were straightforward fan praise, so it’s clear that Team Bachelorette is fully aware of those audience members who like to poke fun at the extravagant love contest. It’s unclear exactly how they chose the onscreen tweets: many were hashtagged #TheBachelorette or tagged the official account @BacheloretteABC, but this wasn’t consistent across the board. There were several tweets featured from the Bachelor “fan” site Bachelor Burn Book (their Twitter bio reads, “How many witty and sarcastic comments can we make about The Bachelor/Bachelorette/Bachelor Pad? The limit does not exist”), often snarking on the contestants’ hair or even host Chris Harrison’s shirt choices (they hate the bright colors and prints).

Part of the appeal of competitive reality shows is the viewer’s ability to participate in the game itself through technology, whether it’s determining who goes home each week on American Idol or The Voice, or voting for a fan favorite contestant on Top Chef or RuPaul’s Drag Race, a process that Tasha Oren refers to as a ludic or game-like interaction in her article “Reiterational Texts and Global Imagination.” Live-tweeting can be a way for fans to participate with a favorite TV text in an increasingly interactive culture, but that practice isn’t just simply fan-generated fun. These interactive efforts are also extremely lucrative for networks and producers vying for viewers. Oren states, “game culture and the rise of a play aesthetic have not only emerged as an organizing experience in media culture but are central to an industry-wide reconfiguration towards interactivity and intertextual associations across media products.”[2] Viewer interaction is now an integral part of television viewing, as a result of an industry-wide effort to sustain attention in a marketplace where attention is increasingly fragmented.

This attention is what television networks and producers need to sell to advertisers, which is why viewer activation is such a priority for those networks losing viewers to Hulu, Netflix, and DVR. Live-tweeting is something that fans do to enhance the viewing experience for themselves, but they’re also providing an important service to networks, offering not only feedback but numbers of engaged audience members. They’re doing the heavy lifting of audience activation by activating themselves. Their efforts are then incorporated into the show’s text, which enhances the product for the fans, yes, but they are essentially “buying” that product with their attention, twice.

This isn’t a boon just for TV networks: social networks are scrambling to be the destination for television conversation as they expand promotions and advertising within their platforms. As Twitter gears up for its IPO, its place at the top of the online TV chatter heap enhances its value. As the New York Times reports this week, “Facebook and Twitter both see the social conversation around television as a way to increase use of their sites and win a bigger piece of advertisers’ spending.” The attention of activated viewers is now being bought and sold on both screens.

The draw to interact with the text for a viewer is often the feeling of control. Audiences that feel like they can affect the outcome feel an authorship or dominance over the text. Snarky tweets about Chris Harrison’s shirt choice allow viewers to feel like ABC isn’t getting one over on them, that they are subverting the system and its production. This phenomenon is explored in Mark Andrejevic’s piece “Watching Television Without Pity: The Productivity of Online Fans,”[3] but his results are less utopian than we anti-fans might imagine: ultimately, this labor still renders the work that we do subject to the mass-mediated text itself, and the networks have turned us into an army of viewer-workers, exerting effort to make these shows interesting for ourselves, the product of these efforts splashed onscreen as part of the attention-grabbing television show itself.

bachelorette tweet

This onscreen tweet campaign was not a completely harmonious union, however. As mentioned, part of the appeal of interactivity is control, and inundating the lower half of the TV screen with snarky tweets may have been efficient for viewers already used to participating in a second screen experience, but not for all. One tweet slipped through the filter, expressing not humor and snark but dissatisfaction and alienation with the format. It read, “I wish the tweets at the bottom of the screen would go away on the bachelorette. #noonecares.” Entertainment Weekly, Mediabistro, and Buzzfeed, which all reported on it, noted the irony of the dissatisfied tweet slipping through into exactly what it was complaining about in a moment of meta-textual anarchy.

While the onscreen tweet producers work on those cracks in the matrix, it seems appropriate to question the economic implications of this practice. One can argue that fans, especially technologically empowered ones, are going to produce as a part of their consumption, whether or not the networks are involved, so why should it matter if ABC decides to use the products of this practice to their own advantage? However, the exploitation of this production becomes clearer when considering just what ABC uses as a commodity to sell to advertisers: audience attention. The onscreen tweet practice is a way to take advantage of that viewer attention even further. Audiences can now pay attention on two screens in order to enjoy the show, which is a value-add for the online and television presence. Are onscreen tweets simply an innocuous add-on to the text of the show itself or a way to capitalize on the increasingly rare attention commodity?

And it’s not just TV networks seeking a piece of the pie: as Twitter gears up for its impending IPO, the number of TV viewers chattering on its platform makes its public offering even more valuable, as it adds more ads and promoted content to users’ timelines. Even Nielsen is getting into the game, finally updating its ratings system with Twitter metrics this week. Though this is a case of corporations organizing themselves, driven by the habits of consumers, it’s important to examine these practices and see where the benefits actually land in the end.

Katie Walsh is a writer, critic, academic, and blogger based in Los Angeles. She has contributed film reviews, TV recaps, and interviews to The Playlist on indieWIRE since 2009, and has written about media and culture for The Hairpin and Movieline. She received her M.A. in Critical Studies from the USC School of Cinematic Arts in 2013, and is currently a first year doctoral student at the USC Annenberg School for Communication and Journalism, focusing on television and new media.

[1] Anna McCarthy, “Reality Television: A Neoliberal Theater of Suffering,” Social Text 93, Vol. 25, No. 4 (2007).

[2] Tasha Oren, “Reiterational Texts and Global Imagination: Television Strikes Back,” in Global Television Formats, ed. Tasha Oren and Sharon Shahaf (Routledge, 2011), 368.

[3] Mark Andrejevic, “Watching Television Without Pity: The Productivity of Online Fans,” Television & New Media (2008).

How Many People Does It Take to Redesign a Light Bulb? USC's Daren Brabham on Crowdsourcing (Part Two)

You seek to distinguish crowdsourcing from Kickstarter, which gets described from time to time in those terms. What is at stake in making these distinctions?

Crowdsourcing is conceptually distinct from crowdfunding. It is only the “crowd” in their names that lumps them together. The one thing they have in common is the existence of an online community.

Where crowdsourcing is a co-creative process in which organizations incorporate the labor and creative insights of online communities into the very creation of something (the design of a product, the solving of a tough chemical formula, etc.), crowdfunding involves using monetary donations from an online community to bring an already conceived concept to market. That is, most of the projects on the crowdfunding site Kickstarter are not asking for individuals to help shape a creative project. Rather, Kickstarter projects are asking for financial backing to bring someone else’s idea into being.

While different, both crowdsourcing and crowdfunding will be enormously important players in the future economy.

While you cite some earlier examples of collaborative production processes, you insist that crowdsourcing really only emerges in the era of networked computing. What did digital media bring to the production process that wasn’t there before?

The capabilities of the Internet – its speed, reach, temporal flexibility, and so on – completely changed the music industry, its legal terrain, and more. The ability to share digital files online quickly, relatively anonymously, all over the world, without loss of file quality – this is why the MP3, torrents, and peer-to-peer file sharing services are fundamentally different from what took place with cassette tape piracy and mix tape sharing. The technology elevated the concept of sharing and piracy to a new level. That is the way I view crowdsourcing and why I see crowdsourcing as really only emerging in recent years.

Many critics like to say that crowdsourcing has been going on for centuries, that any sort of large-scale problem solving or creative project over the last several hundred years counts as crowdsourcing. I think, though, that the speed, reach, temporal flexibility, and other features of the Internet allow crowdsourcing to be conceptually distinct from something like the distributed creation of the Oxford English Dictionary in the 1800s.

You note that crowdsourcing has shown only modest success when tested in the field of journalism. What is it about journalism that makes it harder to apply these models?

There have been plenty of experiments with citizen journalism and other forms of open, participatory reporting, and some of these have even kind of succeeded. I have heard much more critique of crowdsourced journalism than praise, though, and I think I know why. There are four problem types that crowdsourcing seems suited to addressing. Two of these address information management problems (the knowledge discovery and management type and the distributed human intelligence tasking type) and two address ideation problems (the broadcast search type and the peer-vetted creative production type).

The problem with some of the crowdsourced journalism experiments that have happened so far is that the writing of entire stories is attempted by an online community. It is still much easier for a single talented writer to report a story than it is to write a narrative by committee. The future of crowdsourced journalism will need to stick closely to these four problem types, crowdsourcing portions of the reporting process while keeping the act of story writing under the control of one or a few trained journalists.

Plenty of steps in the journalistic process can be crowdsourced, though! News organizations can turn to online communities and charge them with finding and assembling scattered bits of information on a given story topic. Or online communities can be asked to comb through large data sets or pore over huge archives for journalists. Crowdsourcing may very well breathe new life into investigative journalism, a rather expensive kind of reporting that some cash-strapped news organizations have drifted away from. Online communities may turn out to be the best tools for journalists hoping to put some teeth back in the Fourth Estate.

Daren C. Brabham is an assistant professor in the Annenberg School for Communication & Journalism at the University of Southern California. He is the author of the book Crowdsourcing (MIT Press, 2013) and has published widely on issues of crowdsourcing in governance and the motivations of online communities. His website is www.darenbrabham.com.

How Many People Does It Take to Redesign a Light Bulb?: USC's Daren Brabham on Crowdsourcing (Part One)

This week, I want to use my blog to welcome a new colleague to the Annenberg School of Communication and Journalism here at USC. I was lucky enough to have met Daren Brabham when he was still a graduate student at the University of Utah. Brabham had quickly emerged as one of the country's leading experts about crowdsourcing as an emerging practice impacting a range of different industries. The video above shows Brabham discussing this important topic while he was an Assistant Professor at the University of North Carolina. But, this fall, USC was lucky enough to lure him away, and I am personally very much looking forward to working with him as he brings his innovative perspectives on strategic communications to our program.

Brabham's insights are captured in a concise, engaging, and accessible book published earlier this fall as part of the MIT Press's Essential Knowledge series, simply titled Crowdsourcing. Brabham is attentive to the highly visible commercial applications of these practices but also to the ways that these ideas are being incorporated into civic and journalistic enterprises to change how citizens interface with those institutions that most directly affect their lives. He also differentiates crowdsourcing from a range of other models which depend on collective intelligence, crowdfunding, or other mechanisms of participatory culture. And he was nice enough to explore some of these same issues through an interview for this blog.

The term “crowdsourcing” has been applied so broadly that it becomes harder and harder to determine precisely what it means. How do you define it in the book and what would be some contemporary examples of crowdsourcing at work?

There is certainly a bit of controversy about what counts as crowdsourcing. I think it is important to provide some structure to the term, because if everything is crowdsourcing, then research and theory development rests on shaky conceptual foundations. One of the principal aims of my book is to clarify what counts as crowdsourcing and offer a typology for understanding the kinds of problems crowdsourcing can solve for an organization.

I define crowdsourcing as an online, distributed problem solving and production model that leverages the collective intelligence of online communities to serve an organization’s needs. Importantly, crowdsourcing is a deliberate blend of bottom-up, open, creative process with top-down organizational goals. It is this meeting in the middle of online communities and organizations to create something together that distinguishes crowdsourcing from other phenomena. The locus of control resides between the community and the organization in crowdsourcing.

One of the great examples of crowdsourcing is Threadless, which falls into what I call the peer-vetted creative production (PVCP) type of crowdsourcing. At Threadless, the company has an ongoing call for t-shirt designs. The online community at Threadless, using an Illustrator or Photoshop template provided by the company, submits silk-screened t-shirt designs to the website. The designs are posted in the community gallery, where other members of the online community can comment or vote on those designs. The highest rated designs are then printed and sold back to the community through the Threadless site, with the winning designers receiving a modest cash reward. This is crowdsourcing – specifically the PVCP type of crowdsourcing – because the online community is both submitting original creative content and vetting the work of peers, offering Threadless not only an engine for creation but also fine-tuned marketing research insights on future products.

Threadless is different from, say, the DEWmocracy campaign, where Mountain Dew asked Internet users to vote on one of three new flavors. This is just simple marketing research; there is no real creative input being offered by the online community. DEWmocracy was all top-down. On the other end of the spectrum are Wikipedia and many open source software projects. In these arrangements, the organization provides a space within which users can create, but the organization is not really directing the day-to-day production of that content. It is all highly structured, but the structure comes from the grassroots; it is all bottom-up. Where organizations meet these communities in the middle, steering their creative insights in strategic directions, is where crowdsourcing happens.

Some have questioned the use of the concept of the “crowd” in “crowdsourcing,” since the word, historically, has been linked to notions of the “mob” or “the masses.” What are the implications of using “crowd” as opposed to “community” or “public” or “collaborative”?

I am not sure that crowdsourcing is really the best term for what is happening in these situations, but it is the term Jeff Howe and Mark Robinson coined for Howe’s June 2006 article in Wired. It is no doubt a catchy, memorable term, and it rightly invokes outsourcing (and all the baggage that goes with outsourcing). The “crowd” part may be a bit misleading, though. I have strayed away from referring to the crowd as “the crowd” and have moved more toward calling these groups “online communities,” which helps to anchor the concept in much more established literature on online communities (as opposed to literature on swarms, flash mobs, and the like).

The problem with “crowd” is that it conjures that chaotic “mob” image. These communities are not really masses. They tend to be groups of experts or hobbyists on a topic related to a given crowdsourcing application who self-select into the communities – graphic designers at Threadless, professional scientists at InnoCentive, and so on. They are not “amateurs” as they are often called in the popular press. Most of the truly active members of these online communities – no surprise – are more like invested citizens in a community than folks who were accidentally swept up in a big rush to join a crowdsourcing site.

The professional identities of these online community members raise some critical issues regarding labor. The “sourcing” part of “crowdsourcing” brings the issue of “outsourcing” to the fore, with all of outsourcing’s potential for exploitation abroad and its potential to threaten established professions. No doubt, some companies embark on crowdsourcing ventures with outsourcing in mind, bent on getting unwitting users to do high-dollar work on the cheap. These companies give crowdsourcing a bad name. Online communities are wise to this, especially the creative and artistic ones, and there are some notable movements afoot, for example, to educate young graphic designers to watch out for work “on spec” or “speculative work,” which are the kinds of exploitive arrangements many of these crowdsourcing ventures seek.

It is important to note that online communities are motivated to participate in crowdsourcing for a variety of reasons. Many crowdsourcing arrangements can generate income for participants, and there are folks who are definitely motivated by the opportunity to make some extra money. Still others participate because they are hoping through their participation to build a portfolio of work to secure future employment; Kathleen Kuehn and Thomas F. Corrigan cleverly call this kind of thing “hope labor.” Still others participate because they enjoy solving difficult problems or they make friends with others on the website. As long as organizations understand and respect these different motivations through policies, community design, community recognition, or compensation, online communities will persist. People voluntarily participate in crowdsourcing, and they are free to leave a site if they are unhappy or feel exploited, so in spite of my Marxian training I often find it difficult to label crowdsourcing “exploitive” outright.

Daren C. Brabham is an assistant professor in the Annenberg School for Communication & Journalism at the University of Southern California. He is the author of the book Crowdsourcing (MIT Press, 2013) and has published widely on issues of crowdsourcing in governance and the motivations of online communities. His website is www.darenbrabham.com.

Projecting Tomorrow: An Interview with James Chapman and Nicholas J. Cull (Part Three)

Henry: War of the Worlds is an interesting case study of the ways that the Cold War impacted science fiction, especially because we can draw clear comparisons between what the story meant at the time Wells wrote it and the ways Steven Spielberg re-imagined it in the wake of 9/11. So, what do these comparisons suggest about the specificity of the discourse on alien invasion in 1950s America?

James: Wells's novel is an invasion narrative with allegorical overtones - it shows a complacent imperial superpower what it might feel like to be on the receiving end of violent colonization by a technologically superior enemy. It's a story that has been mobilised at times of geopolitical tension: Orson Welles's (in)famous radio broadcast of 1938 came immediately after the Munich Agreement, the 1953 film was made at the height of the Cold War, and, as you say, the 2005 Spielberg film reconfigured the basic story in the context of the War on Terror.

We use the 1953 film, produced by George Pal, as the focus of our case study. This is a case where my understanding of the film was really enhanced by doing the archival research. The archive reveals two particular points of interest. The first is the extent to which the film emphasized Christianity. Now, Wells was an atheist, and the book includes a very unsympathetic characterization of a Church of England cleric who is both deranged and a coward. In the film, however, Pastor Collins becomes a heroic character, who dies while trying to make peace with the invaders, while the resolution - in which the Martians are eventually destroyed by the common cold bug - is specifically attributed to Divine intervention.

The various treatments and scripts in the Paramount archives show how this element was built up in successive drafts. This is consistent with American Cold War propaganda, which equated the United States with Christianity in opposition to the godless Communists. So, this aspect of the production locates the film of War of the Worlds in the context of US Cold War propaganda, and might prompt us to compare it to other 1950s alien-invasion films such as Invaders from Mars or The Thing.

However, the other point that came out of the archival research was that the Pentagon, which liaised with Hollywood in providing stock footage and military personnel, refused to co-operate with this particular film. The reason they advanced was that the film showed the US military as unable to repel an alien (for which read Communist) invasion. In the film even the atom bomb is ineffective against the Martians. The Pentagon wasn't happy about this aspect of the film, so Paramount had to turn instead to the Arizona National Guard! So, in this regard, the film is not quite the 'official' Cold War propaganda that I had thought - and it was only researching the production history that revealed this aspect of the film.

Henry: Stanley Kubrick is currently being celebrated by an exhibition at LACMA and he remains a figure who has enormous cultural prestige even now, yet in the case of several of his films, including 2001: A Space Odyssey, A Clockwork Orange, and A.I. (which was made by Spielberg after his death), he worked in SF, a genre which has struggled for cultural legitimacy. How might we understand the status attached to these films, given the tendency of critics to otherwise dismiss SF films as brainless entertainment?

James: Again this is an example of how the archive illuminates our understanding of the films. The origin of 2001 was Kubrick's desire to make "the proverbially 'really good' science fiction movie" - to which end he invited Arthur C. Clarke to collaborate on the project. Having Clarke on board attached a degree of cultural prestige - like H.G. Wells before him, Clarke was a well-known author, but also one whose work had a strong scientific basis (the 'science' aspect of science fiction, if you like). It was another case of a problematic relationship between a film-maker and an SF author, as they ended up with rather different ambitions for the film. But I don't think that Kubrick was all that bothered about the low cultural status attached to science fiction. For Kubrick 2001 was really an exploration of existential themes that just happened to be an SF movie. Incidentally, it was while doing the research for 2001, during the course of which he read hundreds of books and articles about science, technology and space travel, that Kubrick came across the article that prompted his interest in 'A.I.' - or Artificial Intelligence.

Henry: You provide some rich insights into the ways that Civil Rights era discourses shaped the making of the Planet of the Apes film series. To what degree do you see the recent remakes of these films retaining or moving away from these themes as they try to make these stories relevant for contemporary viewers?

James: This is a case of how SF responds to the social and political contexts in which it is produced. The first Planet of the Apes in 1968 was quite explicitly about the Civil Rights movement and the relationships between different ethnic groups, if you like, drawing a clear parallel between race and socio-economic status. And the later films in the series, especially Conquest of the Planet of the Apes, make this theme even more explicit. But race doesn't seem quite such an important theme in the more recent films. That's not to say that the issue is no longer important, but rather that the film-makers are now responding to a different set of concerns. I enjoyed Rise of the Planet of the Apes - it's a sort of 'alternative history' of the Apes films - though I didn't feel that it had quite the same polemical edge as the original film series between 1968 and 1973.

Nick: My sense was that the 2011 reboot Rise of the Planet of the Apes was hitting slightly different marks, especially issues around the ethics of bioengineering, and a warning against exploitation whether on class or race lines is still apposite. The Tim Burton take in 2001 seemed rather more in the line of a tribute than a piece with something to say about its own times except ‘we’re running low on ideas.’

Henry: You have no doubt seen the announcement of plans to begin production on a new set of Star Wars films, now that George Lucas is handing over control of his empire to a new generation of filmmakers. Your analysis of Star Wars explores the ways that Lucas built this saga on borrowings from other films and on the core structures of myths and fairy stories rather than on any speculation about real-world concerns. He would have described this project as one designed to create “timeless” entertainment. To what degree do you see Star Wars as of its time, and to what degree does returning to the franchise now require some fundamental rethinking of its core premises?

Nick: The initial success of Star Wars was absolutely of its time – America was tired of cynicism, Vietnam, Watergate and so forth and looking to escape back to innocence. Lucas gave them their cinematic past in pastiche form and a moral and redemptive message. While I think Lucas intended his own revisiting of the saga in the prequel trilogy to make new points about the vulnerability of democracy and a noble individual to corruption, the new films were really more about Star Wars than anything else. Their performance was not tied to their suitability for the moment in which they appeared but rather to the quality (or otherwise) of the effects and story. I think the saga is a powerful enough property to generate its own bubble of relevance, which is a kind of timelessness, at least as long as people remember enjoying the films. Star Wars has created its own reality and obscured its own source material. Storm Trooper means Star Wars, not Nazi Germany, to most Americans under fifty.

James: I'd suggest that most, if not all, film genres eventually become self-referential. The main points of reference for the original Star Wars were other movies - as Nick's chapter so brilliantly brings out. For the prequel films the points of reference were not so much other movies as previous Star Wars movies - they feed upon our own memories of Star Wars.

Henry: You describe Lucas as struggling consciously with the racial politics of the adventure genre titles that inform his project, making a series of compromises across the development of the original film in terms of its treatment of race and gender. How do these behind-the-scenes stories help us to understand the ongoing controversy around how Star Wars deals with race and gender?

Nick: I was startled by the extent to which Lucas initially saw Star Wars as a way to get progressive ideas about diversity before an audience. He toyed with the idea of an all-Japanese cast, a black Han Solo, and a Eurasian Princess Leia (which would have made his later twin subplot a harder sell), but backed away from these ideas as production got underway. He said he couldn’t make Star Wars and Guess Who’s Coming to Dinner at the same time. His aliens became a device through which he could still have ‘fun’ with difference and notions of the exotic or the savage without worrying about disgruntled Sand People or Wookies picketing Mann’s Chinese Theatre on opening night. I think it is correct to ask questions about the racial politics of Star Wars not so much to question whether George Lucas is a bigot (which I do not think he is) but rather to use Star Wars as a mirror to a society that plainly has mixed feelings about diversity and female empowerment.

Henry: RoboCop is another of your case study films which is currently being remade. You link the original to debates around big business and the state of urban America under the Reagan administration. What aspects of this story do you think remain relevant in the era of Occupy Wall Street and the Tea Party?

Nick: I certainly do see RoboCop as one of the great movies editorializing on business in the 1980s – right up there with Wall Street. I’ll be fascinated to see how the new RoboCop tackles these subjects. Certainly corporate ethics and privatization remain live issues. It was always interesting to me that RoboCop still needed to imagine that the #1 guy at the corporation was good. I wonder if that will still be the case. Of course RoboCop is an anti-corporate allegory told by a corporation, so they will probably fudge the issue and not have Murphy marching into Congress and demanding the reinstatement of the Glass-Steagall Act or restraints on Wall Street.

Henry: You end the book with a comparison between Science Fiction Cinema and television. So, what do you see as the most important differences in the ways that the genre has fared on the small screen? If you were writing this book on science fiction television, which programs would yield the richest analysis and why?

Nick: There is a symbiotic relationship between SF film and TV. A number of the films we look at can be seen as outgrowths of TV – Quatermass is the most obvious; some use TV expertise – like 2001: A Space Odyssey; some have lent their technology to TV; many have TV spin-offs or imitators – Logan’s Run and Planet of the Apes are cases in point. I think TV tends by its nature to bring everything home, turning everything into a cyclical family drama, whereas film tends to stretch everything to the horizon and emphasize linearity and personal transformation. Both approaches have strengths and weaknesses for SF subjects. I think that there is an intimacy of engagement possible for the audience of a television show which is much harder to create with a one-off film.

As you’ve documented, Henry, at its best television becomes truly embedded in people’s lives. This is the power of Star Trek or Doctor Who. James and I have both written about Doctor Who elsewhere and there is more to be said. I’ve written a little about the television programs of Gerry Anderson, Thunderbirds and so forth, which have been underserved in the literature thus far. I am fascinated by the imagined future in Anderson’s output, with global governance and institutions: post-war optimism traced to the horizon.

James: It's a fascinating question - and one where technological change is important. I'd suggest that in the early days of TV - when most drama was produced live in the studio - TV had the edge over film because the technological limitations meant that it had to focus on ideas and characterization. Hence The Quatermass Experiment and its sequels, arguably, work better on TV than in their cinema remakes. There's also a symbiosis between the form of SF literature and early TV.

Until the mid-twentieth century much of the best SF literature was in the form of short stories rather than novels - this transferred very easily to SF anthology series such as The Twilight Zone and The Outer Limits. That's not a form of TV drama we have today. Since c.2000, however, there's been a vast technological and aesthetic change in the style of TV science fiction. One of the consequences of digital technology in both the film and TV industries has been to blur the distinction between the two media. A lot of TV today looks like film - and vice versa. Certainly TV science fiction has become more 'cinematic' - look at the revisioning of Battlestar Galactica or the new Doctor Who. The visual effects are as good as cinema, while the TV series have adopted the strategy of 'story arcs' that lends them an epic dimension - like the longer stories you can tell in film.

Nick mentions that we've both written, independently, on Doctor Who, and there's certainly more to be said there - and with its spin-offs Torchwood and The Sarah Jane Adventures. It works both as a narrative of British power and as an exploration of Anglo-American relations - themes we cover in the SF Cinema book. I don't know whether we'll go on to write a companion volume on US and UK television science fiction, but if we do there's plenty of scope. The Twilight Zone is a key text, certainly, not least because it employed a number of SF authors to write scripts. The Invaders is an interesting riff on the invasion narrative, a 1950s Cold War paranoia text but made in the 1960s. V is a cult classic - paranoia reconfigured for the 1980s.

In Britain series such as Survivors and Blake's 7 demonstrate again a very dystopian vision of the future. There were also faithful, authentic adaptations of SF literature like The Day of the Triffids and The Invisible Man in the 1980s. Back in the US, series like The Six Million Dollar Man, The Bionic Woman and The Incredible Hulk clearly have things to say about the relationship between science and humanity. I've already mentioned Battlestar Galactica but there are plenty of other examples too: Space: Above and Beyond, Farscape, Firefly, the various Star Trek spin-offs. That's the beauty of science fiction - the universe is infinite!

For those who would like to read what Chapman and Cull have had to say about Doctor Who, here you go:

Nick Cull, 'Bigger on the Inside: Doctor Who as British cultural history,' in Graham Roberts and Philip M. Taylor (eds.), The Historian, Television and Television History (University of Luton Press, 2001), pp. 95-111.

Nick Cull, 'Tardis at the OK Corral,' in John R. Cook and Peter Wright (eds.), British Science Fiction Television: A Hitchhiker's Guide (London: I. B. Tauris, 2006), pp. 52-70.

Chapman's WhoWatching blog: http://whowatching.wordpress.com/2013/05/21/review-the-name-of-the-doctor/

Nick Cull is professor of communication at University of Southern California.  He is a historian whose research focuses on the interface between politics and the mass media.  In addition to well-known books on the history of propaganda he has published widely on popular cinema and television including shorter pieces on Doctor Who, Gerry Anderson and The Exorcist.

James Chapman is professor of film at University of Leicester in the UK.  He is a historian who has specialized in popular film and television.  His work has included book length studies of James Bond, Doctor Who, British Adventure Serials, British Comic Books and British propaganda in the Second World War.  His previous collaboration with Nick Cull was a book on Imperialism in US and British popular cinema.

Projecting Tomorrow: An Interview with James Chapman and Nicholas J. Cull (Part Two)

Henry: As you suggest in your introduction, “futuristic narratives and images of SF cinema are determined by the circumstances of their production.” What relationship do you posit between the ebb and flow of visibility for science fiction films and the evolution of the American and British film industries?

Nick: When we wrote that line we were thinking mainly about the way in which historical circumstance can be channeled into SF, which is so wonderfully open to addressing contemporaneous issues by allegory (or hyperbole), but I think it can be applied to the film industries or ‘industrial context’ if you will. Cinema is a business, and there are clear business cycles at work. While we found that the reputation of SF as a high-risk genre which seldom delivered on its promise to producers was exaggerated – we ran into more examples of bland returns than out-and-out ruination – that reputation does seem to have inhibited production somewhat. Production runs in cycles, as if producers on both sides of the Atlantic feel sure that SF will pay off, take a risk with a high-budget film, fail to realize expectations, and then back off in disappointment for a couple of seasons.

2001 breaks the cycle and ushers in an SF boom which has yet to end. The successes are so spectacular that they carry the genre over the bumps. The boom made it economically feasible to develop dedicated technologies to create even better films – the story of Industrial Light and Magic is a case in point – and these technologies seem to have been best displayed in a genre which allows or even requires images of the fantastic.

I think SF has now evolved into the quintessential film genre which sells itself based on taking images to new places. There are industrial reasons reinforcing this trend, not the least being that if you make your money from exhibiting something on the big screen you need to seek out stories that are actually enhanced by that treatment. Not every genre lends itself. I doubt there will ever be a British social realist film of the sort done by Ken Loach or Mike Leigh shot in IMAX, though insights from that approach can take SF to new places, witness Attack the Block.

James: The market is also relevant here. Take Things to Come: one of the top ten films at the UK box office in 1936, but the British market alone was insufficient to recoup the costs of production, and the film didn't do much business in the United States. Another theme that crops up several times is that, while Britain no longer has a large film production industry, it does have excellent studio and technical facilities. Hence big Hollywood-backed films like 2001 and Star Wars were shot in British studios with largely British crews. And there are other examples - Alien, Judge Dredd - that we didn't have space to include.

Henry: A central emphasis here is in the ways that science fiction responds to popular debates around political and technological change. It’s a cliché that Hollywood had little interest in delivering “messages,” yet historically, science fiction was a genre which sought to explore “ideas,” especially concerns about the future. How did these two impulses work themselves out through the production process? Do you see science fiction cinema as the triumph of entertainment over speculation or do most of the films you discuss make conscious statements around the core themes which SF has explored?

Nick: As I said when thinking about the literary/cinematic transition, I think that messages and ideas can have a hard time in Hollywood and often find themselves being forced out by images. This said, the messages that survive the process are all the more potent. Avatar may have been all about what James Cameron could do with digital 3-D, but it made important points about indigenous rights and the environment along the way.

James: There've been some honourable and well-intentioned attempts to build SF films around ideas or messages - Things to Come stands out - though I think that in general, and this is true of popular cinema as a whole and not just SF, audiences tend to be turned off by overt political or social messages and prefer their ideas served up within a framework of entertainment and spectacle. Nick's chapter on Star Wars, to take just one example, shows how this film was able to address a range of contemporary issues within a framework of myth and archetypes that resonated with audiences at the time and since. Here, as elsewhere, 2001 is the watershed film - perhaps the only ideas-driven SF film that was also a huge popular success.

Henry: You devote a chapter to the little-known 1930 film Just Imagine, and, among other things, note that it is not altogether clear how much Hollywood or audiences understood this to be a science fiction film, given its strong ties to musical comedy as a genre. What signs do we have about the role these genre expectations played in shaping the production and reception of Just Imagine?

Nick: Neither the producers nor the audience of Just Imagine had much idea what was going on generically. First of all, the production team were a re-assembly of the group who had worked on the studio’s boy-meets-girl hit Sunny Side Up, and all their credentials were in musical comedy; secondly, the critics who saw the film had trouble finding terminology to describe it. They tended towards terms like ‘Fantasy’ and drew parallels with The Thief of Baghdad rather than Metropolis. Finally, there was the question of lawsuits, as sundry writers claimed that elements we now think of as common points of the genre, such as space flight to Mars, were original to them. Courts were unimpressed.

Henry: Things to Come is one of those rare cases where a literary SF writer -- in this case, H.G. Wells -- played an active role in shaping the production of a science fiction film. What can you tell us about the nature of this collaboration and was it seen as a success by the parties involved?

James: It's a fascinating, and complex, story. This one film exemplifies perfectly the tension between ideas and spectacle that runs throughout the history of SF cinema. Wells was contracted by Alexander Korda, Britain's most flamboyant film producer, and the closest that the British industry ever had to one of the Hollywood 'movie moguls', to develop a screenplay from his book The Shape of Things to Come. Wells was interested because, unlike many writers, he believed in the potential of cinema as a medium for exploring ideas and presenting his views to a wider public.

From Korda's perspective, Wells was a 'name' whose involvement attached a degree of intellectual prestige to the film. But there were two problems. The first was that Wells tried to exercise control over all aspects of the production, even to the extent of dictating memos on what the costumes should look like - which Korda was not prepared to allow. The second problem was that The Shape of Things to Come - an imaginative 'history of the future' - is not a very cinematic book: no central characters, for example, or big set pieces. So a new story had to be fashioned.

Some aspects of Wells's vision were lost in the process. For example, the book is critical of organised religion, but the British Board of Film Censors frowned upon any criticism of the Church as an institution - so that theme goes by the wayside. And Wells's book posits the notion that a well-intentioned technocratic dictatorship - he calls it the 'Puritan Tyranny' - would be beneficial for solving the problems of the world. Again this is significantly downplayed in the film.

So there were a lot of compromises. The collaboration is perhaps best described as one of creative tensions. Publicly, Wells spoke warmly of Korda and his collaboration with director William Cameron Menzies (an American, incidentally, referring back to our previous discussion of Anglo-American contexts). But privately he was profoundly disappointed by the finished film and was scathing about Menzies, whom he described as "incompetent". In the end, Things to Come is one of those cases where the finished film reveals traces of the problematic production. For Wells it was about the ideas, for Korda it was about the spectacle - but the two are not really reconciled into a wholly satisfying experience.

Nick Cull is professor of communication at University of Southern California.  He is a historian whose research focuses on the interface between politics and the mass media.  In addition to well-known books on the history of propaganda he has published widely on popular cinema and television including shorter pieces on Doctor Who, Gerry Anderson and The Exorcist.

James Chapman is professor of film at University of Leicester in the UK.  He is a historian who has specialized in popular film and television.  His work has included book length studies of James Bond, Doctor Who, British Adventure Serials, British Comic Books and British propaganda in the Second World War.  His previous collaboration with Nick Cull was a book on Imperialism in US and British popular cinema.

Projecting Tomorrow: An Interview with James Chapman and Nicholas J. Cull (Part One)

The recently published Projecting Tomorrow: Science Fiction and Popular Film offers vivid and thoughtful case studies that consider the production and reception of key British and American science fiction movies, including Just Imagine (1930), Things to Come (1936), The War of the Worlds (1953), The Quatermass Experiment and its sequels (1955), Forbidden Planet (1956), 2001: A Space Odyssey (1968), Planet of the Apes (1968), The Hellstrom Chronicle (1971), Logan's Run (1976), Star Wars (1977), RoboCop (1987), and Avatar (2009). I very much enjoyed the background that Chapman and Cull provided on these films. Even though I was familiar with each of these films already, I managed to learn something new in every chapter. The authors did a masterful job in the selection of examples -- a mix of the essential and the surprising -- which nevertheless manage to cover many of the key periods in the genre's evolution on the screen. They make a strong case for why SF films need to be considered in their own right and not simply as an extension of the literary version of the genre. Chapman and Cull are long-time SF fans, but they also bring the skills of an archival historian and expertise in global politics to bear on these rich case studies. All told, I suspect this book is going to be well received by fans and academics alike.

I have gotten to know Cull, who is a colleague of mine here at the Annenberg School of Communication and Journalism, through hallway and breakroom conversations about our mutual interests in Doctor Who and a range of other cult media properties, and I was delighted to have some interplay with Chapman when he visited USC a year or so back. I am therefore happy this week to be able to share with you an interview with the two authors that gets at some of the key themes running through Projecting Tomorrow.

 

Henry: Let me ask you a question you pose early in your introduction: “Why has SF literature been so poorly served by the cinema?” Perhaps, we can broaden out from that and ask what you see as the relationship between science fiction literature and film. Why do the differences in the media and their audiences result in differences in emphasis and focus?

Nick: This is an excellent question. My sense is that SF literature has tended to serve divergent objectives from SF film. I am taken by the British novelist/critic Kingsley Amis’s observation fifty years ago that the idea is the hero in literary science fiction. My corollary to that is that the image is the hero in SF cinema. Cinema by its nature emphasizes image over ideas, and all the more so as the technology to generate ever more spectacular images has advanced.

James: I think there's also a sense in which SF literature has always been a slightly niche interest - popular with its readership, yes, but generally not best-seller levels of popular. SF cinema, in contrast, is now a mainstream genre that has to serve the needs of the general cinema-going audience as well as genre fans. Hence the charge from SF readers that cinema by and large doesn't do SF very well - that the need to attract a broad audience (because of the expense of the films) leads to a diluting of the 'ideas' aspect of SF in literature. One of the themes we track in the book is the process through which SF went from being a marginal genre in cinema to becoming, from the 1970s, a major production trend.

 



Henry: What criteria led to the selection of the case studies you focus upon in Projecting Tomorrow?

Nick: We chose films that could both represent the SF cinema tradition on both sides of the Atlantic and illuminate a range of historical issues. We needed films that had a good supply of archive material to which we could apply our historical research methods, and all the better if that material had hitherto escaped scholarly analysis. We wanted the milestones to be present but also some surprise entries too. There were some hard choices. We doubted there was anything really new to say about Blade Runner so that proposed chapter was dropped. I was keen to write about Paul Verhoeven’s Starship Troopers but was unable to locate sufficient archive material for a historical approach. It was during that search that I found the treasure trove of material from Verhoeven’s RoboCop and decided to write about that instead. One of the Star Trek films and Jurassic Park were also late casualties from the proposal. There are some surprise inclusions too. We both find the combination of genres a fascinating phenomenon and hence included The Hellstrom Chronicle, which grafts elements of SF onto the documentary genre and managed to spawn a couple of SF projects in the process.

James: The selection of case studies was a real problem for this book, as SF is such a broad genre in style and treatment, and there are so many different kinds of stories. We wanted to have broad chronological coverage: the 'oldest' film is from 1930 (Just Imagine) and the most recent is 2009 (Avatar). It would have been possible to write a dozen case studies focusing on a single decade - the 1950s, for example, or the 1970s, both very rich periods for SF cinema - but we felt this would have been less ambitious and would not have enabled us to show how the genre, and its thematic concerns, have changed and evolved over time. Beyond that, Nick and I are both historians by training, and we wanted examples where there was an interesting story behind the film to tell. Logan's Run, for example, is a case where the production history is in certain ways more interesting than the finished film: George Pal had wanted to make it in the late 1960s as a sort of 'James Bond in Tomorrowland' but for various reasons it didn't happen then, and when it was finally made, in the mid 1970s, the treatment was more serious (and perhaps portentous). Some films selected themselves: we could not NOT have milestones like Things to Come and 2001: A Space Odyssey - and in the latter case the Stanley Kubrick Archive had recently been opened to researchers and so there were new primary sources available. I wanted to include Dark Star, a sort of spoof response to 2001, but there wasn't much in the way of archive sources and the background to the film is quite well known - and in any event we already had plenty of other case studies from the 1970s. In the end, although we had to leave out some important films, like Invasion of the Body Snatchers (I'd simply refer readers to Barry Keith Grant's excellent study of this film in the British Film Institute's 'Film Classics' series), this meant we could find space for some forgotten films, such as Just Imagine, and for some that are probably less familiar to US audiences, such as The Quatermass Experiment.

Henry: You have made a conscious choice here to include British as well as American exemplars of science fiction. How would you characterize the relationship between the two? In what ways do they intersect with each other? How are the two traditions different?

Nick: British and American SF and culture more widely are thoroughly intertwined. The sad truth is that US corporate culture tends to homogenize so I think it helps to have the UK bubbling along across the pond as a kind of parallel universe in which different responses can emerge and save the creative gene pool from in-breeding. SF cinema has seen some great examples of this Anglo-American cross-fertilization process. 2001: A Space Odyssey is a terrific example of that. If I had to essentialize the difference between the two approaches, I’d say that Britain is a little more backward-looking (anticipating Steam Punk) and the US has been more comfortable with a benign military presence. Today the two traditions have become so interlinked that it is very difficult to disentangle them, but they seem to be good for each other.

James: The Anglo-American relationship was also something we'd explored in our first book together, Projecting Empire, where we found there were strong parallels in the representation of imperialism in Hollywood and British cinema. In that book we have two case studies by Nick, on Gunga Din and The Man Who Would Be King, showing how a British author, Rudyard Kipling, met the ideological needs of American film-makers. The equivalent of Kipling for science fiction is H.G. Wells, a British author widely adapted, including by Hollywood - and again we have two case studies of Wellsian films. If I were to generalize about the different traditions of US and UK science fiction - and this is a gross over-simplification, as there are numerous exceptions - it would be that by and large American SF movies have held to a generally optimistic view of the future whereas British SF, certainly since the Second World War, has been more pessimistic. This might reflect the contrasting fortunes of the two nations since the mid-twentieth century - American films expressing the optimism and confidence of the newly emergent superpower, British films coming to terms with the slow decline of a former imperial power. But, as I said, this is an over-simplification. Planet of the Apes, for example, has a very dystopian ending (though later films in the series are more optimistic in suggesting the possibility of peaceful future co-existence), whereas Doctor Who (albeit from television) is an example of British SF with a generally positive outlook on the future.

Nick Cull is professor of communication at the University of Southern California.  He is a historian whose research focuses on the interface between politics and the mass media.  In addition to well-known books on the history of propaganda he has published widely on popular cinema and television, including shorter pieces on Doctor Who, Gerry Anderson and The Exorcist.

James Chapman is professor of film at the University of Leicester in the UK.  He is a historian who has specialized in popular film and television.  His work has included book-length studies of James Bond, Doctor Who, British adventure serials, British comic books and British propaganda in the Second World War.  His previous collaboration with Nick Cull was a book on imperialism in US and British popular cinema.

How to Watch Television: The Walking Dead

Today, I want to showcase the launch of an exciting new book, How to Watch Television, edited by Ethan Thompson and Jason Mittell. The editors recognized a gap in the field of television studies between the kinds of essays we ask our students to write (often close readings focused on specific episodes) and the kinds of exemplars we provide them from scholarly publications (often theory-dense, focused on making much larger arguments, and making moves which it is hard for undergrads or early graduate students to match). Contributors, myself among them, were asked to focus on specific episodes of specific programs, to do a close analysis with limited amounts of fancy theoretical footwork, and to demonstrate the value of a particular analytic approach to understanding how television works. Thompson and Mittell brought together a who's who of contemporary television studies writers and encouraged them to write about a broad array of programs. You can get a sense of the project as a whole by reading the table of contents. I have only read a few of the essays so far, having just recently gotten my author's copy, but the book more than lives up to its promise.

I. TV Form: Aesthetics and Style

1. Homicide: Realism – Bambi L. Haggins

2. House: Narrative Complexity – Amanda D. Lotz

3. Life on Mars: Transnational Adaptation – Christine Becker

4. Mad Men: Visual Style – Jeremy G. Butler

5. Nip/Tuck: Popular Music – Ben Aslinger

6. Phineas & Ferb: Children’s Television – Jason Mittell

7. The Sopranos: Episodic Storytelling – Sean O’Sullivan

8. Tim and Eric’s Awesome Show, Great Job!: Metacomedy – Jeffrey Sconce

II. TV Representations: Social Identity and Cultural Politics

9. 24: Challenging Stereotypes – Evelyn Alsultany

10. The Amazing Race: Global Othering – Jonathan Gray

11. The Cosby Show: Representing Race – Christine Acham

12. The Dick Van Dyke Show: Queer Meanings – Quinn Miller

13. Eva Luna: Latino/a Audiences – Hector Amaya

14. Glee/House Hunters International: Gay Narratives – Ron Becker

15. Grey’s Anatomy: Feminism – Elana Levine

16. Jersey Shore: Ironic Viewing – Susan J. Douglas

III. TV Politics: Democracy, Nation, and the Public Interest

17. 30 Days: Social Engagement – Geoffrey Baym and Colby Gottert

18. America’s Next Top Model: Neoliberal Labor – Laurie Ouellette

19. Family Guy: Undermining Satire – Nick Marx

20. Fox & Friends: Political Talk – Jeffrey P. Jones

21. M*A*S*H: Socially Relevant Comedy – Noel Murray

22. Parks and Recreation: The Cultural Forum – Heather Hendershot

23. Star Trek: Serialized Ideology – Roberta Pearson

24. The Wonder Years: Televised Nostalgia – Daniel Marcus

IV. TV Industry: Industrial Practices and Structures

25. Entertainment Tonight: Tabloid News – Anne Helen Petersen

26. I Love Lucy: The Writer-Producer – Miranda J. Banks

27. Modern Family: Product Placement – Kevin Sandler

28. Monday Night Football: Brand Identity – Victoria E. Johnson

29. NYPD Blue: Content Regulation – Jennifer Holt

30. Onion News Network: Flow – Ethan Thompson

31. The Prisoner: Cult TV Remakes – Matt Hills

32. The Twilight Zone: Landmark Television – Derek Kompare

V. TV Practices: Medium, Technology, and Everyday Life

33. Auto-Tune the News: Remix Video – David Gurney

34. Battlestar Galactica: Fans and Ancillary Content – Suzanne Scott

35. Everyday Italian: Cultivating Taste – Michael Z. Newman

36. Gossip Girl: Transmedia Technologies – Louisa Stein

37. It’s Fun to Eat: Forgotten Television – Dana Polan

38. One Life to Live: Soap Opera Storytelling – Abigail De Kosnik

39. Samurai Champloo: Transnational Viewing – Jiwon Ahn

40. The Walking Dead: Adapting Comics – Henry Jenkins

You can order it at the NYU Press website, along with previewing the introduction or requesting a review copy for faculty thinking about adopting it in a class. You can also order it on Amazon. Or please request it at an independent bookstore near you, if you’ve got one.

Thompson and Mittell have shrewdly offered those of us who have blogs the chance to share our own essays from the collection with the idea of helping to build up the buzz around this promising release.  Spreadability at work! So, I am happy to share today my musings about The Walking Dead, written after the end of Season 1. (Don't get me started about the speed of academic publishing: by normal standards, this one had a pretty rapid turnaround, but we still lag behind any other mode of publication. This is why I so value sites like Flow, In Media Res, and Antenna.)

 

The Walking Dead: Adapting Comics

Henry Jenkins

Abstract: One of the key ways that television connects to other media is by adapting pre-existing properties from films, comics, and other formats. Henry Jenkins uses one of the most popular of such recent adaptations, The Walking Dead, to highlight the perils and possibilities of adaptations, and how tapping into pre-existing fanbases can pose challenges to television producers.

The comic book industry now functions as Hollywood's research and development department, with a growing number of media properties inspired by graphic novels, including not only superhero films (Green Lantern, X-Men: First Class, Thor) and both live-action and animated television series (Smallville, The Brave and the Bold), but also films from many other genres (A History of Violence, American Splendor, 30 Days of Night, Scott Pilgrim vs. the World). There are many possible explanations for Hollywood’s comic book fixation:

 

1. DC and Marvel are owned by Warner Brothers and Disney, respectively, who cherry-pick what they think will satisfy mass audience interests.

 

2. Comics-based stories are to contemporary cinema what magazine short stories were to Classical Hollywood—more or less presold material.

 

3. Hardcore comics readers fall into a highly desired demographic—teen and twentysomething males—who have abandoned television in recent years for other media.

 

4. Comic books are a visual medium, offering something like a storyboard establishing basic iconography and visual practices to moving image media.

 

5. Digital special effects have caught up to comics’ most cosmic storytelling, allowing special effects houses to expand their technical capacities.

 

6. Contemporary television and comics involve a complex mix of the episodic and the serial, deploying long-form storytelling differently from most films or novels.

 

7. The streamlined structure of comics offers emotional intensification closely aligned with contemporary screen practices.

 

Despite such claims, comic adaptations often radically depart from elements popular with their original comics-reading audience. Mainstream comics readership has been in sharp decline for several decades: today’s top-selling title reaches fewer than a hundred thousand readers per month—a drop in the bucket compared with the audiences required for cable success, let alone broadcast networks. Some graphic novels have moved from specialty shops to chain bookstores, attracting a “crossover” readership, including more women and more “casual” fans. Adapting a comic for film or television often involves building on that “crossover” potential rather than addressing hardcore fans, stripping away encrusted mythology.

AMC's The Walking Dead (2010-present) is a notable exception, establishing its reputation as "faithful" to the spirit if not the letter of the original, even while introducing its original characters, themes, and story world to a new audience. Robert Kirkman’s comic series was a key example of the crossover readership graphic novels can find at mainstream bookstores. Kirkman has freely acknowledged his debts to George Romero’s Living Dead films, while others note strong parallels with 28 Days Later. The Walking Dead’s success with crossover readers and Kirkman’s reliance on formulas from other commercially successful franchises in the genre explain why producers felt they could remain “true” to the comics while reaching a more expansive viewership.

Using “Wildfire,” the fifth episode from The Walking Dead’s first season, I will explore what aspects of the comic reached television, what changes occurred, and why hardcore fans accepted some changes and not others. As a longtime Walking Dead reader, I am well situated to explore fan response to shifts in the original continuity.

To understand what The Walking Dead meant to comics readers, one might well start with its extensive letter column. Here, dedicated fans ask questions and offer opinions about every major plot development. Kirkman established a deeply personal relationship with his fans, sharing behind-the-scenes information about his efforts to get the series optioned and then developed for television, responding to reader controversies, and discussing the comic’s core premises and genre conventions (“the rules”). Kirkman summarized his goals in the first Walking Dead graphic novel:

With The Walking Dead, I want to explore how people deal with extreme situations and how these events CHANGE them... You guys are going to get to see Rick change and mature to the point that when you look back on this book you won’t even recognize him... I hope to show you reflections of your friends, your neighbors, your families, and yourselves, and what their reactions are to the extreme situations on this book... This is more about watching Rick survive than it is about watching Zombies pop around the corner and scare you... The idea behind The Walking Dead is to stay with the character, in this case, Rick Grimes, for as long as is humanly possible... The Walking Dead will be the zombie movie that never ends.[1]

If, as Robin Wood formulated it, the horror genre examines how normality is threatened by the monstrous, Kirkman’s focus is less on the monstrous and more on human costs.[2] The comic’s artwork (originally by Tony Moore but mostly by Charlie Adlard) offers gorehounds detailed renderings of rotting faces (lovingly recreated for the television series by makeup artist Greg Nicotero) and blood splattering as humans and zombies battle, but it is also focused on melodramatic moments, as human characters struggle to maintain normality in the face of the monstrous. This merger of horror and melodrama may explain why, despite its gore, The Walking Dead comics appeal almost as much to female readers as they do to the men who constitute the core comics market. Early on, some fans criticized the comic’s shambling “pace,” going several issues without zombie encounters. However, once they got a taste of Kirkman’s storytelling, many realized how these scenes contributed to the reader’s deeper investment in the characters’ plights.

Given his intimate and ongoing relationship with readers, Kirkman’s participation as an executive producer on the television adaptation was key for establishing credibility with his long-term readers. Series publicity tapped Kirkman’s street cred alongside AMC’s own reputation for groundbreaking, character-focused television dramas (Mad Men, Breaking Bad) and the reputations of executive producers Frank Darabont (The Green Mile, The Shawshank Redemption) and Gale Anne Hurd (Aliens, The Abyss, The Terminator franchise) with filmgoers, establishing an aura of exceptionality.

The Walking Dead was a key discussion topic at the 2010 San Diego Comic-Con, a gathering of more than 200,000 influential fans. Posters produced specifically for the convention compared the television characters with their comic book counterparts. The trade room display reconstructed an iconic comic location, a farmhouse where a family had killed themselves rather than change into zombies. Both tactics reaffirmed that the series was closely based on the comics. And Kirkman was front and center, promising fans the series would capture the essence of his long-articulated vision. If the producers won the hearts of the hardcore fans, they might count on them to actively rally viewers for the series premiere. Thanks, in part, to fan support in spreading the word and building enthusiasm, The Walking Dead broke all basic cable ratings records with its debut episode and broke them again with the launch of Season 2.

By the time The Walking Dead reached the air, Kirkman had produced and published 12 full-length graphic novels, representing more than 70 issues. Yet the first season of the television series covered only the first six issues. On the one hand, this expansive narrative offered a rich roadmap. On the other, it threatened to lock the producers down too much, making it hard for the series to grow on its own terms. Speaking at PaleyFest in Los Angeles after season one, Kirkman acknowledged that exploring different paths through the material allowed him to revisit roads not taken in his own creative process.

The challenge was to give veteran fans recognizable versions of the established characters and iconic moments. Fans had to be able to follow the story structure in broad outline, even as the producers were changing major and minor plot points and adding new themes and character moments. The audience anticipated that any changes would be consistent with Kirkman’s oft-articulated “ground rules,” and yet the producers wanted the freedom to take the story in some new directions. The Walking Dead had built its reputation on surprising its readers in every issue—any character could die at any moment and taboos could be shattered without blinking an eye. How could the television series have that same impact if the most dedicated fans already knew what was going to happen next?

“Wildfire” was perhaps season one’s most emotionally powerful episode, the one where many core themes came into sharpest focus. It was based upon the final chapter of the first graphic novel, which set the tone for the rest of the comics series. The episode includes several memorable moments from the comics, specifically the deaths of two major characters (Amy and Jim), yet also several shifts that hinted at how dramatically the producers had revised things. Fans embraced some of these changes, while others violated their collective sense of the franchise.

As “Wildfire” opens, the protagonists are recovering from a traumatic and abrupt zombie attack that killed several recurring characters and forced the survivors to confront the vulnerability of their encampment, preparing them to seek a new “home” elsewhere, a recurring quest in the comics. The attack’s basic outline remains consistent with the graphic novel. For example, Amy gets caught by surprise when she separates from the others, while Jim gets chomped in the ensuing battle. The brutal attack disrupts a much more peaceful “fish fry” scene, which provides an excuse for characters to reveal bits of their backstory. The ruthless battle shows how each character has begun to acquire self-defense and survival skills.

Yet, a central emotional incident, Andrea’s prolonged watch over her dead sister Amy’s body, occupied only two panels of Kirkman’s original comic. There, Andrea tells Dale, “I can’t let her come back like that,” capturing the dread of a loved one transforming into the undead. The television series used this line as a starting point for a much more elaborate character study, built across several episodes as the two sisters, a decade-plus apart in age in this version (though not in the original), offer each other physical and emotional support. The two sisters talk in a boat about the family tradition of fishing and how their father responded to their different needs. Andrea plans to give Amy a birthday present, telling Dale that she was never there for her sister’s birthdays growing up. The image of Andrea unwrapping the present and hanging the fishing lure around her dead sister’s neck represents the melodramatic payoff fans expect from The Walking Dead in whatever medium. The expansion of this incident into a prolonged melodramatic sequence has to do both with issues of modality (the range of subtle facial expressions available to a performer working in live action, as opposed to the compression required to convey the same emotional effect through static images) and with AMC’s branding as a network known for “complex narratives,” “mature themes,” and “quality acting.”

“Wildfire” shows Andrea protecting Amy’s body as the others seek to convince her to allow her sister to be buried. We hear the sounds of picks thrashing through the skulls of other zombies in the background and watch bodies being prepared to burn. And, finally, Amy returns to life for a few seconds. Andrea looks intently into Amy’s eyes, looking for any signs of human memory and consciousness, stroking her sister’s face as Amy’s gasps turn into animalistic grunts. The producers play with these ambiguities through their use of makeup: Amy is more human-looking compared with the other zombies, where the focus is on their bones, teeth and muscle rather than their eyes, flesh and hair. In the end, Andrea shoots her sister with the pistol she’s been clutching, an act of mercy rather than violence.

Much of the sequence is shot in tight close-ups, focusing attention all the more closely on the character’s reactions. This is the first time the television series has shown us humans transition into zombies. Several issues after this point in the story (issue 11), the comic revisits this theme with a troubling story of Hershel, a father who has kept his zombie daughter chained and locked in a barn, unable to accept the irreversibility of her fate (an incident which was enacted on screen near the climax of the series’s second season). Here, Andrea’s willingness to dispatch Amy is a sign of her determination to live.

The comic explores Jim’s death, by contrast, in more depth. Jim’s family had been lost in a previous zombie attack: Jim was able to escape because the zombies were so distracted eating his other family members. The book’s Jim is a loner who has not forged intimate bonds with the others, but who aggressively defends the camp during the zombie attack. In the comic, Jim is so overwrought with guilt and anger that he smashes one zombie’s skull to a pulp. In the television series, this action is shifted onto an abused wife who undergoes a cathartic breakdown while preparing her dead husband for burial. On the one hand, this shift gave a powerful payoff for a new subplot built on the comic’s discussion of how the zombie attacks had shifted traditional gender politics and on the other, it allowed a tighter focus on Jim’s slow acceptance of the prospect of becoming a zombie.

In both media, Jim initially hides the reality of being bitten from the other campers. Finally, he breaks down when someone notices his wounds. While the producers used the comic as a visual reference for this iconic moment, there are also substantial differences in the staging, including the shift of the bite from Jim’s arm to his stomach and the ways the other campers manhandle him to reveal the bite.

Jim’s story conveys the dread with which a bitten human begins preparing for a transformation into a zombie. In both the comic and the television series, Jim asks to be left, propped up against a tree so that he might rejoin his family when the inevitable change comes. Here, again, the television series elaborates on these basic plot details, prolonging his transformation to show the conflicting attitudes of the other campers to his choice. The television series is far more explicit than the comic about parallels with contemporary debates about the right of the terminally ill to control the terms of their own death.

In both sets of changes highlighted here, the television series remains true to the spirit of the original comic if not to the letter—especially in its focus on the processes of mourning and loss and the consequences of violence, both often overlooked in traditional horror narratives. Both represent elaborations and extensions of elements from the original book. And both link these personal narratives with the community’s collective experience, as in the scene where many from the camp say goodbye to Jim as he lies against a tree awaiting his fate. Some offer him comfort, others walk past unable to speak.

On the other hand, two other “Wildfire” plotlines represent more decisive breaks with the comics—the confrontation between Shane and Rick and the introduction of the Centers for Disease Control. Rick had been cut off from his wife and son when Shane, his best friend, helped them escape while Rick was lying comatose in the hospital. Believing Rick dead, Lori and Shane couple until Rick finds his way back to his family. In Kirkman’s original, Dale warns Rick that Shane made advances on Lori. In the television series, Rick has no idea of the potential infidelity, but the audience knows that Shane and Lori have made love. In the graphic novel, the two men go out to the woods to have it out. In the final panels of the first graphic novel, Shane attempts to kill Rick and is shot in the head by Rick’s 8-year-old son, Carl. The boy collapses in his father’s arms and says, “It’s not the same as killing the dead ones, Daddy.” Rick responds, “It never should be, Son. It never should be.”

In “Wildfire,” tension mounts throughout the episode as the two men clash over what the group should do next. Both turn to Lori for moral support, which she is unable to offer, instead saying, “Neither one of you were entirely wrong.” In the television version, Shane initially mistakes Rick for a deer in the woods; once he has his friend in his gun sights, however, he finds himself unable to draw down. Dale, rather than Carl, comes upon the two men, ending Shane’s moral dilemma. When he returns from the woods, Shane seems ready to accept Rick’s leadership. Shane’s survival represents a decisive shift from the original, though by the season’s end its ramifications were not clear. Perhaps this is a case where Kirkman saw unrealized potentials that, given a chance, he wanted to mine more deeply.

But, in removing Carl from the scene, the television producers could be accused of pulling punches, given how central the sequence of the young boy shooting the adult male (and its refusal to engage in sentimental constructions of childhood innocence) had been in the comic’s initial reception. Carl’s repeated brushes with violence, and his willingness to take action when adults hesitate, form a recurring motif throughout the books. If the comics often shocked readers by abruptly killing off long-established characters, here the producers surprised some viewers by refusing to kill a character whose death represented an early turning point in the comics.

The visit to the Centers for Disease Control, which is introduced in the closing scenes of “Wildfire” and becomes the focus of the season’s final episode, “TS-19,” has no direct counterpart in the comic book series. One of the hard-and-fast rules Kirkman established in the comics was that he was never going to provide a rational explanation for how the zombie outbreak occurred. As Kirkman argues in an early letter column:

 

As far as the explanation for the zombies go, I think that aside from the zombies being in the book, this is a fairly realistic story, and that’s what makes it work. The people do real things, and it’s all very down to Earth... almost normal. ANY explanation would be borderline science fiction... and it would disrupt the normalness. In my mind, the story has moved on. I’m more interested in what happens next then what happened before that caused it all.[3]

One reason Kirkman has Rick in a coma at the start of the comic series is so that the audience is not exposed to the inevitable theorizing which would surround a society coping with such a catastrophe. (A web series, produced for the launch of the second season, further explored what had happened while Rick was in his coma, offering a range of contradictory possible explanations for the zombie epidemic.)

Many fans were anxious about the introduction of the CDC subplot, which implied a medical explanation. At the same time, the closing scenes at the CDC also represent the first time we’ve cut away from Rick or the other members of his party to see another perspective on the unfolding events (in this case, that of an exhausted and suicidal scientist). For both reasons, many fans saw this subplot as another dramatic break with the spirit of the comic.

And it came at an unfortunate moment—at the end of the abbreviated first season, as the last taste before an almost year-long hiatus. If the series’ publicity and presentation had largely reassured long-time readers that the series would follow the established “rules,” these final developments cost the producers some hard-won credibility, especially when coupled with news that the production company had fired most of the staff writers who worked on the first season, that AMC was reducing the per-episode budget for the series, and that producer Frank Darabont was also leaving under duress.

By this point, The Walking Dead was the biggest ratings success in AMC’s history, leaving many comics fans to worry whether their support was still necessary for the series’ success. It would not be the first time that a series acknowledged a cult audience’s support only long enough to expand its following, and then pivoted to focus on the new viewers who constituted the bulk of its rating points.

As this Walking Dead example suggests, there is no easy path for adapting this material to the small screen. There are strong connections between the ways seriality works in comics and television, but also significant differences that make a one-to-one mapping less desirable than it might seem. Television producers want to leave their own marks on the material by exploring new paths that occasionally surprise even loyal fans. The challenge is to make these adjustments consistent not with the details of the original stories but with their “ground rules,” the underlying logic, and one good place to watch this informal “contract” between readers and creators take shape is the letter columns published in the back of the comics. It is through this process that the producers can figure out what they owe to the comics and to their readers.

Further Reading

Gordon, Ian, Mark Jancovich, and Matthew P. McAllister, eds. Film and Comic Books (Jackson: University Press of Mississippi, 2007).

Jones, Matthew T.  Found in Translation:  Structural and Cognitive Aspects of the Adaptation of Comic Art to Film (Saarbrücken:  VDM Verlag, 2009)

Pustz, Matthew. Comic Book Culture: Fan Boys and True Believers (Jackson: University Press of Mississippi, 2000).

 


[1] Robert Kirkman, The Walking Dead Vol. 1: Days Gone By (New York: Image, 2006).

[2] Robin Wood, “An Introduction to the American Horror Film,” in Bill Nichols (ed.) Movies and Methods vol. 2 (Berkeley: University of California Press, 1985), 195-220.

[3] Robert Kirkman, “Letter Hacks,” The Walking Dead 8, July 2004.

 

 

Guerrilla Marketing: An Interview with Michael Serazio (Part Two)

You make an interesting argument here that today’s guerrilla advertising represents the reverse of the culture jamming practices of the 1980s and 1990s, i.e., if culture jamming or adbusting involved the hijacking of Madison Avenue practices for an alternative politics, then today’s branding often involves the hijacking of an oppositional stance/style for branding purposes. Explain.

There have been various examples that have popped up here and there that hint at this hijacking: Adbusters magazine’s apparent popularity with ad professionals; PBR’s marketing manager looking to No Logo for branding ideas; heck, AdAge even named Kalle Lasn one of the “ten most influential players in marketing” in 2011.  Similarly, you see this subversive, counterculture ethos in the work of Crispin Porter + Bogusky, the premier ad shop of the last decade.  But I think the intersection goes deeper than these surface ironies and parallels.  There’s something about the aesthetics and philosophy of culture jamming that contemporary advertising finds enticing (especially when trying to speak to youth audiences): It channels a disaffection with consumer culture; a streetwise sensibility; and so on.  For culture jammers, such stunts and fonts like flash mobs and graffiti art are political tools; for advertisers, they’re just great ways to break through the clutter and grab attention.  More abstractly, culture jammers see branding as an elaborate enterprise in false consciousness that needs to be unmasked toward a more authentic lived experience; guerrilla marketers, on the other hand, simply see culture jamming techniques as a way of reviving consumers from the “false consciousness” of brand competitors.  Think different, in that sense, works equally well as an Apple slogan and a culture-jamming epigram.

 

You cite one advertising executive as saying, “friends are better at target marketing than any database,” a comment that conveys the ways that branding gets interwoven with our interpersonal relationships within current social media practices. What do you see as some of the long-term consequences of this focus on consumer-to-consumer marketing?

 

In a sense, the whole book – and not merely the friend-marketing schemes – is an exploration of how commercial culture can recapture trust amidst rampant consumer cynicism.  That’s what drives guerrilla marketing into the spaces we’re seeing it: pop culture, street culture, social media, and word-of-mouth.  These contexts offer “authenticity,” which advertisers are ever desperate to achieve given their fundamental governmental task is actually the polar opposite: contrivance.  (Sarah Banet-Weiser’s new book offers a sophisticated analysis of this fraught term across wide-ranging contexts in this regard.)  As far as long-term consequences go, I think it’s important to keep in mind the complicity of consumers in this whole process: In other words, being a buzz agent is still just a voluntary thing.  It’s not like these participants are being duped or exploited into participating.  It’s worth accounting for that and asking why shilling friends is acceptable in the first place.  Is it because of some kind of “social capitalism” wherein we already think of ourselves in branding terms and use hip new goods to show we’re in the marketplace vanguard?  The book is, of course, only a study of marketers not consumers, so it’s pure conjecture, but I think understanding that complicity is key to any long-term forecast of these patterns’ effects on our relationships and culture.

 

Both of our new books pose critiques of the concept of “the viral” as they apply to advertising and branding, but we come at the question from opposite directions. What do you see as the core problems with the “viral” model?

 

From my perspective, there’s an implicit (and not necessarily automatically warranted) populism that accompanies the viral model and label.  Viral success seems to “rise up” from the people; it has a kind of grassroots, democratic, or underground ethos about it.  In some cases, this is deserved, as we see when some random, cheap YouTube video blows up and manages to land on as many screens and in front of as many eyeballs as a Hollywood blockbuster that has all the promotional and distribution machinery behind it.  And because viral is supposedly underdog and populist, it’s “authentic,” so advertisers and brands naturally gravitate toward it, which, for me, makes it an intriguing object of study.  Abstractly speaking, that, too, is at the heart of the book’s inquiry and critique: the masquerades and machinations of powerful cultural producers (like advertisers) working through surrogate channels (like viral) that exude that authentic affect in different guises (here, populist).  Again, this is not to invalidate the genuine pluckiness of a “real” viral hit; it’s simply to keep watch on efforts to digitally “astroturf” that success when they show up.

 

While this blog has often treated what I call “transmedia storytelling” or what Jonathan Gray discusses as “paratexts” sympathetically as an extension of the narrative experience, you also rightly argue that it is an extension of the branding process. To what degree do you see, say, alternate reality games as an extension of the new model of consumption you are discussing in this book? Do their commercial motives negate the entertainment value such activities provide?

 

Oh, certainly not – and I should clarify here that I’m by no means taking the position that commercial motives necessarily negate the pleasure or creativity of participatory audiences.  Alternate reality games (or alternate reality marketing, as I call it) are, in a sense, the fullest extension of many of these practices, themes, and media platforms scattered throughout the book.  They feature outdoor mischief (e.g., flash mob-type activities) and culture jamming-worthy hoaxes, seek to inspire buzz and social media productivity from (brand) communities, and, above all, seem to be premised upon “discovery” rather than “interruption” in the unfolding narrative.  And the sympathetic treatments of their related elements (transmedia storytelling, paratexts) are assuredly defensible.  But they are, also, advertising – and, for my purposes here, they’re advertising that tries not to seem like advertising.  And, again, I believe that in that self-effacement much is revealed about today’s cultural conditions.

 

You end the book with the observation that “more media literacy about these guerrilla efforts can’t hurt.” Can you say more about what forms of media literacy would be desirable? What models of media change should govern such efforts? What would consumers/citizens need to know in order to change their fates given the claims about structure and agency you make throughout the book?

 

I suppose I end the book on a lament as much as a diatribe.  I’m not an abject brand-hater and I hope the book doesn’t come off that way.  That said, I certainly do empathize with the myriad critiques of advertising mounted over the years (i.e., its divisive designs on arousing envy, its ability to blind us to the reality of sweatshop labor, its unrealistic representation of women’s bodies, etc.).  The media literacy I aim for is awareness that these commercial forms are (often invisibly) invading spaces in which we have not traditionally been accustomed to seeing advertising.  In general, brands don’t address us on conscious, rational terms and, thus, if we’re wooed by them, our subsequent consumer behavior is not necessarily informed as such.  In that sense, I guess, it’s as much a Puritan critique of commercialism as it is, say, Marxist.  Media literacy like this would encourage consumers to think carefully and deeply about that which advertisers seek to self-efface and to (try to) be conscious and rational in the face of guerrilla endeavors that attempt to obfuscate and bypass those tendencies.  The cool sell is an enticing seduction.  But we can – and do – have the agency to be thoughtful about it.

Thanks very much for the opportunity to discuss the book!

Michael Serazio is an assistant professor in the Department of Communication whose research, writing, and teaching interests include popular culture, advertising, politics, and new media.  His first book, Your Ad Here: The Cool Sell of Guerrilla Marketing (NYU Press, 2013), investigates the integration of brands into pop culture content, social patterns, and digital platforms amidst a major transformation of the advertising and media industries.  He has work appearing or forthcoming in Critical Studies in Media Communication, Communication, Culture & Critique, Television & New Media, and The Journal of Popular Culture, among other scholarly journals.  He received his Ph.D. from the University of Pennsylvania's Annenberg School for Communication and also holds a B.A. in Communication from the University of San Francisco and an M.S. in Journalism from Columbia University.  A former staff writer for the Houston Press, he was a finalist for the Livingston Awards and has written essays on media and culture for The Atlantic, The Wall Street Journal, The Nation, and Bloomberg View.  His webpage can be found at: http://sites.google.com/site/linkedatserazio

Guerrilla Marketing?: An Interview with Michael Serazio (Part One)

Transmedia, Hollywood 4: Spreading Change. Panel 1 - Revolutionary Advertising: Creating Cultural Movements from UCLA Film & TV on Vimeo.

From time to time, I have been showcasing, through this blog, the books which Karen Tongson and I have been publishing through our newly launched Postmillennial Pop series for New York University Press. For example, Karen ran an interview last March with Lucy Mae San Pablo Burns, author of the series’s first book, Puro Arte: Filipinos on the Stage of Empire. This week, I am featuring an exchange with Michael Serazio, the author of another book in the series, Your Ad Here: The Cool Sell of Guerrilla Marketing, and I have arranged to feature interviews with the other writers in the series across the semester.

We were lucky to be able to feature Serazio as one of the speakers on a panel at last April's Transmedia Hollywood 4: Spreading Change conference (see the video above), where he won people over with his soft-spoken yet decisive critiques of current branding and marketing practices. Your Ad Here achieves an admirable balance: it certainly raises very real concerns about the role which branding and marketing play in contemporary neo-liberal capitalism, calling attention to the hidden forms of coercion often deployed in approaches which seem to be encouraging a more "empowered" or "participatory" model of spectatorship. Yet he also recognizes that the shifting paradigm amounts to more than a rhetorical smokescreen, and so he attempts to better understand the ways that brands are imagining their consumers at a transformative moment in the media landscape. His approach is deeply grounded in the insider discourses shaping Madison Avenue, yet he also can step outside of these self-representations to ask hard questions about what it means to be a consumer in this age of converged and grassroots media.  I was struck as we were readying this book for publication that it was ideally read alongside two other contemporary publications -- Sarah Banet-Weiser's Authentic TM: The Politics of Ambivalence in a Brand Culture (see my interview with Banet-Weiser last spring) and our own Spreadable Media: Creating Meaning and Value in a Networked Culture (co-authored with Sam Ford and Joshua Green). Each of these books comes at a similar set of phenomena -- nonconventional means of spreading and attracting attention to messages -- through a somewhat different conceptual lens.

You will get a better sense of Serazio's unique contributions to this debate by reading the two-part interview which follows.

You discuss the range of different terminology the industry sometimes uses to describe these emerging practices, but end up settling on “Guerrilla Marketing.” Why is this the best term to describe the practices you are discussing?

 

Conceptually, I think “guerrilla” marketing best expresses the underlying philosophy of these diverse practices.  To be certain, I’m appropriating and broadening industry lingo here: If you talk to ad folks, they usually only think of guerrilla marketing as the kind of wacky outdoor stunts that I cover in chapter 3 of the book.  But if you look at the logic of branded content, word-of-mouth, and social media strategies, you see consistent patterns of self-effacement: the advertisement trying to blend into its non-commercial surroundings – TV shows and pop songs, interpersonal conversations and online social networks.  Advertising rhetoric has long doted upon militarized metaphors – right down to the fundamental unit of both sales and war: the campaign. 

But when I started reading through Che Guevara’s textbook on guerrilla warfare, I heard parallel echoes of how these emerging marketing tactics were being plotted and justified.  Guerrilla warfare evolved from conventional warfare by having unidentified combatants attack outside clearly demarcated battle zones.  Guerrilla marketing is an evolution from traditional advertising (billboards, 30-second spots, Web banners, etc.) by strategizing subtle ad messages outside clearly circumscribed commercial contexts.  Guerrilla warfare forced us to rethink the meaning of and rules for war; guerrilla marketing, I would argue, is doing the same for the ad world.

 

Let’s talk a bit more about the concept of “clutter” that surfaces often in discussions of these advertising practices. On the one hand, these new forms of marketing seek to “cut through the clutter” and grab the consumer’s attention in a highly media-saturated environment; on the other, these practices may extend the clutter by tapping into previously unused times and spaces as the focal point for their branding efforts. What do you see as the long-term consequences of this struggle over “clutter”?

 

Matthew McAllister had a great line from his mid-1990s book that tracked some of these same ad trends to that point: “Advertising is… geographically imperialistic, looking for new territories that it has not yet conquered.  When it finds such a territory, it fills it with ads – at least until this new place, like traditional media, has so many ads that it becomes cluttered and is no longer effective as an ad medium.”  I think this encapsulates what must be a great (albeit bitter) irony for advertisers: You feel like your work is art; it’s all your competitors’ junk that gets in the way as clutter. 

As to the long-term fate of the various new spaces hosting these promotional forms, I don’t have much faith that either media institutions or advertisers will show commercial restraint if there’s money to be made and eyeballs to be wooed.  I think eventually pop culture texts like music tracks and video games will be as saturated as film and TV when it comes to branded content; journalism, regrettably, seems to be leaning in the same direction with the proliferation of sponsored “native advertising” content.  Facebook and Twitter have been trying to navigate this delicate balance of clutter – increasing revenues without annoying users – but here, too, it doesn’t look promising.

Maybe if audiences wanted to pay for so much of the content and access which they’ve grown accustomed to getting for free, then clutter would not be the expected outcome here, but I’m not terribly sanguine on that front either.  The one guerrilla marketing tactic I don’t see over-cluttering its confines is word-of-mouth, just because as a medium (i.e., conversation) it remains, comparatively, the “purest,” and it’s hard to imagine how that (deliberate, external) commercial saturation would look or play out.

 

There seems to be another ongoing tension in discussions of contemporary media between a logic of “personalization” and individualization on the one hand and a logic of “social” or “networked” media on the other. Where do you see the practices you document here as falling on that continuum? Do some of these practices seem more individualized, some more collective?

 

Really interesting question and here I’ll borrow Rob Walker’s line from Buying In on the “fundamental tension of modern life” (that consumer culture seeks to resolve): “We all want to feel like individuals.  We all want to feel like a part of something bigger than our selves.” 

The guerrilla marketing strategies that are showing up in social media probably best exemplify this paradox.  On one hand, brands want to give fans and audiences both the tools for original self-expression and simultaneously furnish the spaces for that networked socialization to take root.  On the other hand, all that clearly needs to be channeled through commercial contexts so as to achieve the “affective economics” that you identified in Convergence Culture

I look at something like the branded avatar creation of, say, MadMenYourself.com, SimpsonizeMe.com, or OfficeMax’s “Elf Yourself” online campaign as emblematic pursuits in this regard.  The “prosumer” can fashion her identity through the aesthetics of the brand-text (i.e., personalization) and then share it through her social networks (i.e., it’s assumed to be communally useful as well).  But, as I note in a forthcoming article in Television & New Media, these tools and avenues for expression and socialization are ultimately limited to revenue-oriented schemes – in other words, corporations are not furnishing these opportunities for self-discovery and sharing from an expansive set of possibilities.  They’re only allowed to exist if they help further the brand’s bottom line.

Michael Serazio is an assistant professor in the Department of Communication whose research, writing, and teaching interests include popular culture, advertising, politics, and new media.  His first book, Your Ad Here: The Cool Sell of Guerrilla Marketing (NYU Press, 2013), investigates the integration of brands into pop culture content, social patterns, and digital platforms amidst a major transformation of the advertising and media industries.  He has work appearing or forthcoming in Critical Studies in Media Communication, Communication, Culture & Critique, Television & New Media, and The Journal of Popular Culture, among other scholarly journals.  He received his Ph.D. from the University of Pennsylvania's Annenberg School for Communication and also holds a B.A. in Communication from the University of San Francisco and an M.S. in Journalism from Columbia University.  A former staff writer for the Houston Press, he was a finalist for the Livingston Awards and has written essays on media and culture for The Atlantic, The Wall Street Journal, The Nation, and Bloomberg View.  His webpage can be found at: http://sites.google.com/site/linkedatserazio