"Hope Is An Active Verb": Brenda Laurel Revisits Computers as Theatre (Part One)

Brenda Laurel's Computers as Theatre was one of the few truly transformative books to emerge in the heady early days of the "digital revolution," demanding that we think of the computer as posing a series of creative problems that might best be addressed through the lens of the dramatic arts, rather than purely technical problems that remain in the domain of computer scientists. In a new edition released this month, she revisits that classic text in light of her rich and diverse experiences as a designer, educator, and entrepreneur. The resulting work looks backwards at how far we have come towards transforming the computer into a new expressive medium, and looks forwards to the technical and cultural problems we still need to resolve if we are going to produce a diverse and sustainable digital culture in the years ahead. I have been lucky enough to have had Laurel as a friend throughout my professional career, and especially to have been able to watch her journey with Interval Research and Purple Moon games, where she broke new ground in seeking to broaden who played computer games, what kinds of experiences games offered, and what this new expressive medium could accomplish. Justine Cassell and I documented some of her core insights in From Barbie to Mortal Kombat: Gender and Computer Games, and we were with her shortly after Mattel acquired and pulled the plug on the whole Secret Paths franchise. But the story is perhaps best told through Brenda's own book, Utopian Entrepreneur, which I still turn to when I seek inspiration about the value of intervening in the creative industries as a vehicle for promoting one's personal and professional agendas. Laurel's insights predicted much that has happened in the games industry since, including the success of The Sims, which in many ways followed her template, the growth of transmedia entertainment, which she helped to model, and the expansion of the female market around casual games.

Brenda Laurel has been and remains an important voice -- in many ways, the conscience of the digital industries -- and so it is with enormous pride that I share with you this exchange conducted online over the summer. Here, she reflects back on where we have been in digital theory and expression and speculates on some directions forward.

Reading back through this, I was struck by curious parallels between your work in Computers as Theatre and what Sherry Turkle was writing about in The Second Self around that same period. Both of you were trying to understand something of the mental models people brought with them to computers, even as you were asking questions that operated on different levels. What relationship do you see between these two key early works of digital theory?

Neither of us could have foreseen the firestorm of FPSs, social networks, and tiny interactions on tiny screens. In a way, I think that Sherry spoke a note of caution which I am trying to make actionable by suggesting that it’s not that these things exist, but to what use they are put (and how designers think about them) that can make them good for us or not (or somewhere in between). The relationship between the books may have been that we were each looking at the coming wave of technology as something fundamentally about humans, our social and developmental and cultural contexts.

Humans extrude technology. It is part of us. We are responsible for it. Each generation of the last 10 has had a new technology to deal with, to set norms about, to learn about appropriate usage. Parents and schools can help with media literacy—this would fit well into a Civics class, if we still had those.

As the topology of social networks complexifies, so do the opportunities and risks. I remember sitting with our girls in the age of television advertising and asking them, what are they trying to sell you? How are they trying to do it? Now they ask others the same questions as casual media critics.

As I sat down to re-read this book, I was struck by the fact that I had no problem accepting the premise that what Aristotle had to say about drama might be valuable in thinking about what we do with computers (a theme on which I gather you had some pushback at the time the book was first published), but I had more difficulty wondering whether something written so early in the history of digital media would have anything to say to contemporary designers. It did, but the fact that this question surfaced for me leads me to ponder: what does this say about the nature of media change over the two decades since you first published this book?

It’s gratifying to me that many folks have worked on ideas in that first book and have made some progress, even recently. The largest excursions in the new edition are probably those about using science more robustly to model interaction. I’ve also emphasized the combined causal factors in multiplayer games and social media. Pointing back to your first question, I think that governance and civility are still essentially unsolved problems in this new world. I included Pavel Curtis and Lambda MOO in the new edition because there was such a valiant effort to figure out governance. I suspect that the lack of civility in multiplayer spaces today (especially in terms of sexual harassment) has something to do with the general lack of civility in our national character at this moment in time. But it also has to do with the designer’s role in framing and normalizing civil relations among multiple participants. There are great opportunities in this regard that might well channel back to our national discourse.

As a fan, I appreciated your rant about J. J. Abrams, Lost, and of course, Star Trek. What do you see as the limits of his "magic box" model for thinking about how to generate interest around stories? What alternatives do you think a more drama-centered approach offers?

As far as JJ has said, his Magic Box has never been opened. That's a problem for starters. If he wants to keep a virgin souvenir, great. But thoughtful plotting does not come out of thin air (or a closed box). Pleasing dramatic structures do not arise ad hoc. To the extent that character is a material cause of plot, the damage JJ has done to Spock and Uhura is unforgivable. It's like throwing out some of the enduring stock characters in a Commedia piece. Spock stood for pure (if tortured) intellect; overtly sexualizing him was not a good thing for the Star Trek mythos. Transforming Uhura from a kick-butt, competent female officer into a romance queen (whose phasers don't work as well as a man's) fundamentally changed the ethos of the character as well as the mythos. That's like saying that Oedipus held his temper at the crossroads and lived happily ever after with Mom.

A more drama-centric approach offers the pleasures of a well-structured plot, including catharsis. For enduring characters and ‘properties’ (e.g., The Odyssey), some core of dramatic tension already exists in the potential of the myth, and it can be spun out into many stories without exhausting its potential to deepen our relationships with the characters, their actions, and their universe.

Brenda Laurel has worked in interactive media since 1976 as a designer, researcher, writer, and teacher in the domains of human-computer interaction and games. She currently serves as an adjunct professor in the Computer Science Department at U.C. Santa Cruz. She served as professor and founding chair of the Graduate Program in Design at California College of the Arts from 2006 to 2012 and of the Graduate Media Design Program at Art Center College of Design in Pasadena (2001-2006), and was a Distinguished Engineer at Sun Microsystems Labs (2005-2006). Based on her research in gender and technology at Interval Research (1992-1996), she co-founded Purple Moon in 1996 to create interactive media for girls. Her books include The Art of Human-Computer Interface Design (1990), Computers as Theatre (1991), Utopian Entrepreneur (2001), Design Research: Methods and Perspectives (2004), and Computers as Theatre: Second Edition (2013).

Three Things that Western Media Fail to Tell You About Chinese Internet Censorship

This is another in a series of blog posts written by the students in my PhD seminar on Public Intellectuals, being taught this semester at USC's Annenberg School of Communication and Journalism.

Strategic Censorship, Ambivalent Resistance, and Loyal Dissident: Three Things that Western Media Fail to Tell You About Chinese Internet Censorship

by Yue Yang

When talking about the Chinese Internet, what would first come to your mind?

The largest online gaming population in the world? A highly creative ICT (information and communication technology) community? An enormous e-commerce market? “Tu hao(土豪)”, “Watch and Observe (围观)”, “Er Huo (二货)”,”Jiong (囧)” ?

I don’t know about your answer, but I am sure most American media would say with alacrity, “No, it is CENSORSHIP!” Indeed, “censorship” seems to have become their knee-jerk word for annotating the Chinese Internet. If you search “New York Times Chinese Internet” on Google, you will find, on the first page of search results, 9 out of 12 news stories related to censorship; for “CNN,” it is 9 out of 9 (with 3 urls linking to non-CNN websites), and for “Fox News,” it is 8 out of 10.

Since American media are so interested in censorship on the Chinese Internet, do they come up with good, objective censorship stories? As a native Chinese and a doctoral researcher studying the Chinese Internet in the US, I would say “yea” for “good storytelling” and “nah” for “objectivity.” Click on one of the top urls and you will see what I mean: this is an exotic digital world. On one hand, the iron-wristed Chinese government launches another round of censorship campaigns: it cleanses criticism, cracks down on dissident sites, and even puts political foes in jail. On the other hand, facing ruthless and stifling censorship, courageous and canny Chinese “netizens” (Internet citizens) use their ingenuity in various ways to evade machine censorship and to mock the impotence of the government. Be it a gloomy “Big Brother” story or an empowering “Tom-and-Jerry” story, a censorship story never lacks tension or an easy-to-follow storyline. However, these stories, grounded only on partial facts, cannot support the universal validity they imply, and they are often too interested in drama to capture the plain truth. In short, current censorship stories in mainstream media are often too simplistic to inform western readers of the complex politics of the Chinese Internet. In what follows, I will discuss three things that western media do not tell their readers about Chinese Internet censorship.

(1) Strategic Censorship: yes, Chinese people criticize the government on the Internet!

The first thing that western media do not tell you about Chinese online censorship is that average Chinese Internet users can and do express a lot of criticism of the party-government. In fact, such criticism attracts little interest from government censors.

People who personally attend to political discussions in Chinese cyberspace widely observe that the online space for speech is expanding and that people can criticize their government without seeing their unfavorable comments censored over time. This observation runs contrary to what most media censorship stories tell people, but it has recently been confirmed by a large-scale, big-data research report from a Harvard research team. By collecting, analyzing, and comparing the substantive content of millions of posts from nearly 1,400 social media services across China, and distinguishing what gets censored from what remains online over time in discussions around 85 topics, the researchers upended some popular stereotypes, finding that “negative, even vitriolic criticism of the state, its leaders, and its policies are not more likely to be censored. Instead, the censorship program is aimed at curtailing collective action by silencing comments that represent, reinforce, or spur social mobilization, regardless of content”. Rather than removing any criticism against it, the Chinese government conducts strategic censorship, which “is oriented toward attempting to forestall collective activities that are occurring now or may occur in the future”.

(2) What Chinese People Think about Censorship: Infringement of Rights or Moral Guidance?

The second thing that western media do not tell you about Chinese online censorship is that Chinese people’s attitudes towards censorship are actually divided and ambivalent.

In 2009, the Chinese government stepped up its censorship efforts in preparation for an extremely sensitive period. Not long before, the famous dissident and later Nobel Peace Prize winner Liu Xiaobo had released the “highly subversive” Charter 08; and starting in March, the government anticipated several major political anniversaries: the 50th anniversary of the Tibetan uprising, the 20th anniversary of the Tiananmen incident, and the 60th anniversary of the founding of the People’s Republic of China. Although nothing except the 60-year national anniversary was to be publicly celebrated, the government was highly vigilant against any online or offline commemoration or mobilization around the other political anniversaries.

In this context, there was little surprise when the Chinese government required that censorship software called “Green Dam Youth Escort” (Lvba Huaji Huhang 绿坝花季护航) be pre-installed on every new PC sold in the market, including those imported from abroad. The stated purpose, of course, was to protect the psychological health of the young from the pollution of pornography and violence. But Chinese Internet users soon found that the software extended censorship to political information. Worse still, the software had so many technical defects that it would severely hurt the overall online experience and security.

Shortly after the installation plan was announced, a large-scale online protest occurred among Chinese Internet users, particularly among the younger generation. Young people soon launched an online carnivalist play-protest, characterized by a manga-style personification of the software called the “Green Dam Girl” (Lvbaliang 绿坝娘). At the same time, the “2009 Declaration of the Anonymous Netizens” (“The Declaration”), a western-style manifesto against censorship, appeared online.

Seeing such resistance, the Chinese government canceled the installation plan, and the “Green Dam incident” became a typical case illustrating China’s emerging civil power countering the government’s blunt censorship decisions. However, when examining the online comments on “The Declaration,” researchers discovered widespread disagreement with the anti-censorship declaration. In fact, there was considerable endorsement of the government’s filtering attempt during the incident.

Why was there public support for censorship? After looking closely at these pro-censorship comments, interviewing their authors, and analyzing the collected data with reference to Chinese culture, the researchers offered a very interesting analysis: unlike westerners, who conceive of government as a “necessary evil” and censorship as a serious infringement of freedom of speech, the majority of Chinese people uphold a Confucian state-society ideal, represented by the notion of “custodian government” (fumu guan 父母官), which frames people’s understanding of censorship.

So what does “custodian government” mean and imply? Basically, it is a Confucian notion proposing a state-society model in which the government maintains its authority by displaying exemplary virtue and parental care for the people, and in return, the people respect and obey the government as they respect and obey their own parents. When both government and people perform their roles properly, social harmony, an ideal that would yield the best for the most, can be realized. Note that traditional Chinese culture does not challenge hierarchy or centralization, nor does it often raise questions of government legitimacy as long as the administration is established in accordance with Confucian ethics.

In the case of “Green Dam,” a large number of people supported government censorship because they expected a morally exemplary, custodian government to establish social norms and to protect as well as regulate minors. In other words, to many Chinese, censorship does not necessarily mean a violation of human rights or an encroachment on individual interests; rather, it means moral measures that are expected and accredited.

Such understanding was more popular among middle-aged Internet users, but it was not rare among the young either. In fact, researchers have found that an impressive percentage of Chinese Internet users are either unaware of or do not care much about online censorship, stating that they are generally happy with the cyberspace they currently have. In short, general attitudes towards censorship are not as settled as most western media claim.

(3) Subversive Dissident or Loyal Dissident?

The third thing that western media do not tell you about Chinese online censorship is that Chinese Internet users are more “loyal dissidents” than subversive resisters, even when they express criticism. Again in 2009, an Internet meme called the “Grass Mud Horse” (Caonima 草泥马) gained viral popularity in Chinese cyberspace. “Grass Mud Horse” sounds almost exactly like an abusive phrase, and it was originally invented by young Chinese gamers to dodge Internet censorship of obscene expressions. Soon the word play adopted the visual form of an alpaca and was extended into stories, animations, music videos, T-shirts, and dolls. Even a virtual Chinese character was later invented for it.

The phenomenal popularity of the Grass Mud Horse attracted a lot of western media attention at its peak. CNN, BBC, and the Guardian, for example, produced extensive reports on it. Citing academics, these reports claim that the Grass Mud Horse is not only a grassroots symbol of resistance against censorship, but also a “weapon of the weak” used to challenge (the legitimacy of) the authoritarian government.

The claim that the “Grass Mud Horse” is play turned into politics, a form of creative resistance against censorship and authoritarianism, is indeed interesting. However, when researchers analyzed how Chinese Internet users actually engaged in the “Grass Mud Horse” carnival, and what intentions people actually expressed through the words, pictures, and related stories, they found that Chinese Internet users tended to use the “Grass Mud Horse” to vent personal frustrations and to criticize local corruption and bureaucracy, rather than to make accusations against censorship or to challenge the government’s legitimacy.

In a similar vein, by looking at the most popular uncensored microblog posts on Weibo that discussed political scandals during the spring of 2012, Swedish researchers found that Chinese Internet users are more interested in criticizing certain activities of the Party than in challenging its hold on power.

In fact, more and more scholars have started to realize that a consensus against the current regime in China has yet to emerge. More interestingly, despite pervasively expressed criticism of the government, two highly respected surveys conducted by non-Chinese scholars (the World Values Survey and the Asian Barometer Survey) show that the loyalty and recognition the Chinese public declares toward its government is much higher than in western democratic societies. Instead of implying another uprising in China, these studies suggest that Chinese Internet users may become more critical and expressive, but they are not ready to demand fundamental democratization.

When creating stories about Chinese Internet censorship, western media often fail in four ways. First, they fail to look closely at what is actually happening; second, they fail to avoid wishful speculation; third, they fail to account for complexity that disrupts clear storytelling; fourth, they fail to put incidents into the broader Chinese social and cultural context. With such failures, western media reduce the extremely interesting and complicated Chinese Internet to a monolith and create stereotypes.

I hope I have explained some important aspects that go beyond the oversimplification of Chinese Internet censorship in western media, so that you, my dear readers, will not only have reservations the next time you hear something about the Chinese Internet, but will also suspend belief whenever you receive messages about a different society from the media. Bolstering critical thinking and avoiding stereotypes: that is what media literacy works toward, and that is also what I am trying to do with this blog post.

Yue Yang is a PhD student at the Annenberg School for Communication, USC. Being a native Chinese, she is constantly confused by, and therefore deeply fascinated with, the complexity of her country's culture and society, online and off. Her current interests range from Chinese people's imagination of the West to the tense dance among the Chinese government, the grassroots, and the intellectuals in the cyber arena (and she always hopes that one day she can write as fast as she eats and publish as much as she speaks).

How Many People Does It Take to Redesign a Light Bulb?: USC's Daren Brabham on Crowdsourcing (Part One)

This week, I want to use my blog to welcome a new colleague to the Annenberg School of Communication and Journalism here at USC. I was lucky enough to have met Daren Brabham when he was still a graduate student at the University of Utah. Brabham had quickly emerged as one of the country's leading experts on crowdsourcing as an emerging practice impacting a range of different industries. The video above shows Brabham discussing this important topic while he was an Assistant Professor at the University of North Carolina. But this fall, USC was lucky enough to lure him away, and I am personally very much looking forward to working with him as he brings his innovative perspectives on strategic communications to our program.

Brabham's insights are captured in a concise, engaging, and accessible book published earlier this fall as part of the MIT Press's Essential Knowledge series, simply titled Crowdsourcing. Brabham is attentive to the highly visible commercial applications of these practices, but also to the ways that these ideas are being incorporated into civic and journalistic enterprises to change how citizens interface with those institutions that most directly affect their lives. He also differentiates crowdsourcing from a range of other models which depend on collective intelligence, crowd-funding, or other mechanisms of participatory culture. And he was nice enough to explore some of these same issues through an interview for this blog.

The term, “Crowdsourcing,” has been applied so broadly that it becomes harder and harder to determine precisely what it means. How do you define it in the book and what would be some contemporary examples of crowd-sourcing at work?

There is certainly a bit of controversy about what counts as crowdsourcing. I think it is important to provide some structure to the term, because if everything is crowdsourcing, then research and theory development rests on shaky conceptual foundations. One of the principal aims of my book is to clarify what counts as crowdsourcing and offer a typology for understanding the kinds of problems crowdsourcing can solve for an organization.

I define crowdsourcing as an online, distributed problem solving and production model that leverages the collective intelligence of online communities to serve an organization’s needs. Importantly, crowdsourcing is a deliberate blend of bottom-up, open, creative process with top-down organizational goals. It is this meeting in the middle of online communities and organizations to create something together that distinguishes crowdsourcing from other phenomena. The locus of control resides between the community and the organization in crowdsourcing.

One of the great examples of crowdsourcing is Threadless, which falls into what I call the peer-vetted creative production (PVCP) type of crowdsourcing. At Threadless, the company has an ongoing call for t-shirt designs. The online community at Threadless, using an Illustrator or Photoshop template provided by the company, submits silk-screened t-shirt designs to the website. The designs are posted in the community gallery, where other members of the online community can comment or vote on those designs. The highest rated designs are then printed and sold back to the community through the Threadless site, with the winning designers receiving a modest cash reward. This is crowdsourcing – specifically the PVCP type of crowdsourcing – because the online community is both submitting original creative content and vetting the work of peers, offering Threadless not only an engine for creation but also fine-tuned marketing research insights on future products.

Threadless is different from, say, the DEWmocracy campaign, where Mountain Dew asked Internet users to vote on one of three new flavors. This is just simple marketing research; there is no real creative input being offered by the online community. DEWmocracy was all top-down. On the other end of the spectrum is Wikipedia and many open source software projects. In these arrangements, the organization provides a space within which users can create, but the organization is not really directing the day-to-day production of that content. It is all highly structured, but the structure comes from the grassroots; it is all bottom-up. Where organizations meet these communities in the middle, steering their creative insights in strategic directions, is where crowdsourcing happens.

Some have questioned the use of the concept of the “crowd” in “crowdsourcing,” since the word, historically, has come linked to notions of the “mob” or “the masses.” What are the implications of using “crowd” as opposed to “community” or “public” or “collaborative”?

I am not sure that crowdsourcing is really the best term for what is happening in these situations, but it is the term Jeff Howe and Mark Robinson came up with for Howe’s June 2006 article in Wired, which was where the term was coined. It is no doubt a catchy, memorable term, and it rightly invokes outsourcing (and all the baggage that goes with outsourcing). The “crowd” part may be a bit misleading, though. I have strayed away from referring to the crowd as “the crowd” and have moved more toward calling these groups “online communities,” which helps to anchor the concept in much more established literature on online communities (as opposed to literature on swarms, flash mobs, and the like).

The problem with “crowd” is that it conjures that chaotic “mob” image. These communities are not really masses. They tend to be groups of experts or hobbyists on a topic related to a given crowdsourcing application who self-select into the communities – graphic designers at Threadless, professional scientists at InnoCentive, and so on. They are not “amateurs” as they are often called in the popular press. Most of the truly active members of these online communities – no surprise – are more like invested citizens in a community than folks who were accidentally swept up in a big rush to join a crowdsourcing site.

The professional identities of these online community members raise some critical issues regarding labor. The “sourcing” part of “crowdsourcing” brings the issue of “outsourcing” to the fore, with all of outsourcing’s potential for exploitation abroad and its potential to threaten established professions. No doubt, some companies embark on crowdsourcing ventures with outsourcing in mind, bent on getting unwitting users to do high-dollar work on the cheap. These companies give crowdsourcing a bad name. Online communities are wise to this, especially the creative and artistic ones, and there are some notable movements afoot, for example, to educate young graphic designers to watch out for work “on spec” or “speculative work,” which are the kinds of exploitive arrangements many of these crowdsourcing ventures seek.

It is important to note that online communities are motivated to participate in crowdsourcing for a variety of reasons. Many crowdsourcing arrangements can generate income for participants, and there are folks who are definitely motivated by the opportunity to make some extra money. Still others participate because they are hoping through their participation to build a portfolio of work to secure future employment; Kathleen Kuehn and Thomas F. Corrigan cleverly call this kind of thing “hope labor.” Still others participate because they enjoy solving difficult problems or they make friends with others on the website. As long as organizations understand and respect these different motivations through policies, community design, community recognition, or compensation, online communities will persist. People voluntarily participate in crowdsourcing, and they are free to leave a site if they are unhappy or feel exploited, so in spite of my Marxian training I often find it difficult to label crowdsourcing “exploitive” outright.

Daren C. Brabham is an assistant professor in the Annenberg School for Communication & Journalism at the University of Southern California. He is the author of the book Crowdsourcing (MIT Press, 2013) and has published widely on issues of crowdsourcing in governance and the motivations of online communities. His website is www.darenbrabham.com.

Projecting Tomorrow: An Interview with James Chapman and Nicholas J. Cull (Part Three)

 

Henry: War of the Worlds is an interesting case study of the ways that the Cold War impacted science fiction, especially because we can draw clear comparisons to what the story meant at the time Wells wrote it and about the ways Steven Spielberg re-imagined it in the wake of 9/11. So, what do these comparisons suggest about the specificity of the discourse on alien invasion in 1950s America?

James: Wells's novel is an invasion narrative with allegorical overtones - it shows a complacent imperial superpower what it might feel like to be on the receiving end of violent colonization by a technologically superior enemy. It's a story that has been mobilised at times of geopolitical tension: Orson Welles's (in)famous radio broadcast of 1938 came immediately after the Munich Agreement, the 1953 film was made at the height of the Cold War, and, as you say, the 2005 Spielberg film reconfigured the basic story in the context of the War on Terror.

We use the 1953 film, produced by George Pal, as the focus of our case study. This is a case where my understanding of the film was really enhanced by doing the archival research. The archive reveals two particular points of interest. The first is the extent to which the film emphasized Christianity. Now, Wells was an atheist, and the book includes a very unsympathetic characterization of a Church of England cleric who is both deranged and a coward. In the film, however, Pastor Collins becomes a heroic character, who dies while trying to make peace with the invaders, while the resolution - in which the Martians are eventually destroyed by the common cold bug - is specifically attributed to Divine intervention.

The various treatments and scripts in the Paramount archives show how this element was built up in successive drafts. This is consistent with American Cold War propaganda, which equated the United States with Christianity in opposition to the godless Communists. So, this aspect of the production locates the film of War of the Worlds in the context of US Cold War propaganda, and might prompt us to compare it to other 1950s alien-invasion films such as Invaders from Mars or The Thing.

However, the other point which came out from the archival research was that the Pentagon, which liaised with Hollywood in providing stock footage and military personnel, refused to co-operate with this particular film. The reason they advanced was that the film showed the US military as unable to repel an alien (for which read Communist) invasion. In the film even the atom bomb is ineffective against the Martians. The Pentagon wasn't happy about this, so Paramount had to turn to the Arizona National Guard instead! So, in this regard, the film is not quite the 'official' Cold War propaganda that I had thought - and it was only researching the production history that revealed this aspect of the film.

Henry: Stanley Kubrick is currently being celebrated by an exhibition at LACMA and he remains a figure who has enormous cultural prestige even now, yet in the case of several of his films, including 2001: A Space Odyssey, A Clockwork Orange, and A.I. (which Spielberg completed after Kubrick's death), he worked in SF, a genre which has struggled for cultural legitimacy. How might we understand the status attached to these films, given the tendency of critics to otherwise dismiss SF films as brainless entertainment?

James: Again this is an example of how the archive illuminates our understanding of the films. The origin of 2001 was Kubrick's desire to make "the proverbially 'really good' science fiction movie" - to which end he invited Arthur C. Clarke to collaborate on the project. Having Clarke on board attached a degree of cultural prestige - like H.G. Wells before him, Clarke was a well-known author, but also one whose work had a strong scientific basis (the 'science' aspect of science fiction, if you like). It was another case of a problematic relationship between a film-maker and an SF author, as they ended up with rather different ambitions for the film. But I don't think that Kubrick was all that bothered about the low cultural status attached to science fiction. For Kubrick 2001 was really an exploration of existential themes that just happened to be an SF movie. Incidentally, it was while doing the research for 2001, during the course of which he read hundreds of books and articles about science, technology and space travel, that Kubrick came across the article that prompted his interest in 'A.I.' - or Artificial Intelligence.

Henry: You provide some rich insights into the ways that Civil Rights era discourses shaped the making of the Planet of the Apes film series. To what degree do you see the recent remakes of these films retaining or moving away from these themes as they try to make these stories relevant for contemporary viewers?

James: This is a case of how SF responds to the social and political contexts in which it is produced. The first Planet of the Apes in 1968 was quite explicitly about the Civil Rights movement and the relationships between different ethnic groups - it draws a clear parallel between race and socio-economic status. And the later films in the series, especially Conquest of the Planet of the Apes, make this theme even more explicit. But race doesn't seem quite such an important theme in the more recent films. That's not to say that the issue is no longer important, but rather that the film-makers are now responding to a different set of concerns. I enjoyed Rise of the Planet of the Apes - it's a sort of 'alternative history' of the Apes films - though I didn't feel that it had quite the same polemical edge as the original film series between 1968 and 1973.

Nick: My sense was that the 2011 reboot Rise of the Planet of the Apes was hitting slightly different marks, especially issues around the ethics of bioengineering, and a warning against exploitation whether on class or race lines is still apposite. The Tim Burton take in 2001 seemed rather more in the line of a tribute than a piece with something to say about its own times except ‘we’re running low on ideas.’

Henry: You have no doubt seen the announcement of plans to begin production on a new set of Star Wars films, now that George Lucas is handing over control of his empire to a new generation of filmmakers. Your analysis of Star Wars explores the ways that Lucas built this saga on borrowings from other films and on the core structures of myths and fairy stories rather than on any speculation about real-world concerns. He would have described this project as one designed to create “timeless” entertainment. To what degree do you see Star Wars as of its time, and to what degree does returning to the franchise now require some fundamental rethinking of its core premises?

Nick: The initial success of Star Wars was absolutely of its time – America was tired of cynicism, Vietnam, Watergate and so forth and looking to escape back to innocence. Lucas gave them their cinematic past in pastiche form and a moral and redemptive message. While I think Lucas intended his own revisiting of the saga in the prequel trilogy to make new points about the vulnerability of democracy and of a noble individual to corruption, the new films were really more about Star Wars than anything else. Their performance was not tied to their suitability for the moment in which they appeared but rather to the quality (or otherwise) of the effects and story. I think the saga is a powerful enough property to generate its own bubble of relevance, which is a kind of timelessness at least as long as people remember enjoying the films. Star Wars has created its own reality and obscured its own source material. Stormtrooper means Star Wars, not Nazi Germany, to most Americans under fifty.

James: I'd suggest that most, if not all, film genres eventually become self-referential. The main points of reference for the original Star Wars were other movies - as Nick's chapter so brilliantly brings out. For the prequel films the points of reference were not so much other movies as previous Star Wars movies - they feed upon our own memories of Star Wars.

Henry: You describe Lucas as struggling consciously with the racial politics of the adventure genre titles that inform his project, making a series of compromises across the development of the original film in terms of its treatment of race and gender. How do these behind-the-scenes stories help us to understand the ongoing controversy around how Star Wars deals with race and gender?

Nick: I was startled by the extent to which Lucas initially saw Star Wars as a way to get progressive ideas about diversity before an audience. He toyed with the idea of an all-Japanese cast, a black Han Solo and a Eurasian Princess Leia (which would have made his later twin subplot a harder sell) but backed away from these ideas as production got underway. He said he couldn’t make Star Wars and Guess Who’s Coming to Dinner at the same time. His aliens became a device through which he could still have ‘fun’ with difference and notions of the exotic or the savage without worrying about disgruntled Sand People or Wookiees picketing Mann’s Chinese Theatre on opening night. I think it is correct to ask questions about the racial politics of Star Wars not so much to question whether George Lucas is a bigot (which I do not think he is) but rather to use Star Wars as a mirror to a society that plainly has mixed feelings about diversity and female empowerment.

Henry: RoboCop is another of your case study films which is undergoing a remake at the current time. You link the original to debates around big business and the state of urban America under the Reagan administration. What aspects of this story do you think remain relevant in the era of Occupy Wall Street and the Tea Party?

Nick: I certainly do see RoboCop as one of the great movies editorializing on business in the 1980s – right up there with Wall Street. I’ll be fascinated to see how the new RoboCop tackles these subjects. Certainly corporate ethics and privatization remain live issues. It was always interesting to me that RoboCop still needed to imagine that the #1 guy at the corporation was good. I wonder if that will still be the case. Of course RoboCop is an anti-corporate allegory told by a corporation, so they will probably fudge the issue and not have Murphy marching into Congress and demanding the reinstatement of the Glass-Steagall Act or restraints on Wall Street.

Henry: You end the book with a comparison between science fiction cinema and television. So, what do you see as the most important differences in the ways that the genre has fared on the small screen? If you were writing this book on science fiction television, which programs would yield the richest analysis and why?

Nick: There is a symbiotic relationship between SF film and TV. A number of the films we look at can be seen as outgrowths of TV – Quatermass is the most obvious; some use TV expertise – like 2001: A Space Odyssey; some have lent their technology to TV; many have TV spin-offs or imitators – Logan’s Run and Planet of the Apes are cases in point. I think TV tends by its nature to bring everything home, turning everything into a cyclical family drama, whereas film tends to stretch everything to the horizon and emphasize linearity and personal transformation. Both approaches have strengths and weaknesses for SF subjects. I think that there is an intimacy of engagement possible for the audience of a television show which is much harder to create with a one-off film.

As you’ve documented, Henry, at its best television becomes truly embedded in people’s lives. This is the power of Star Trek or Doctor Who. James and I have both written about Doctor Who elsewhere and there is more to be said. I’ve written a little about the television programs of Gerry Anderson, Thunderbirds and so forth, which have been underserved in the literature thus far. I am fascinated by the imagined future in Anderson’s output, with its global governance and institutions: post-war optimism traced to the horizon.

James: It's a fascinating question - and one where technological change is important. I'd suggest that in the early days of TV - when most drama was produced live in the studio - TV had the edge over film because the technological limitations meant that it had to focus on ideas and characterization. Hence The Quatermass Experiment and its sequels, arguably, work better on TV than in their cinema remakes. There's also a symbiosis between the form of SF literature and early TV.

Until the mid-twentieth century much of the best SF literature was in the form of short stories rather than novels - this transferred very easily to SF anthology series such as The Twilight Zone and The Outer Limits. That's not a form of TV drama we have today. Since c.2000, however, there's been a vast technological and aesthetic change in the style of TV science fiction. One of the consequences of digital technology in both the film and TV industries has been to blur the distinction between the two media. A lot of TV today looks like film - and vice versa. Certainly TV science fiction has become more 'cinematic' - look at the revisioning of Battlestar Galactica or the new Doctor Who. The visual effects are as good as cinema, while the TV series have adopted the strategy of 'story arcs' that lends them an epic dimension - like the longer stories you can tell in film.

Nick mentions that we've both written, independently, on Doctor Who, and there's certainly more to be said there - and with its spin-offs Torchwood and The Sarah Jane Adventures. It works both as a narrative of British power and as an exploration of Anglo-American relations - themes we cover in the SF Cinema book. I don't know whether we'll go on to write a companion volume on US and UK television science fiction, but if we do there's plenty of scope. The Twilight Zone is a key text, certainly, not least because it employed a number of SF authors to write scripts. The Invaders is an interesting riff on the invasion narrative, a 1950s Cold War paranoia text but made in the 1960s. V is a cult classic - paranoia reconfigured for the 1980s.

In Britain series such as Survivors and Blake's 7 demonstrate again a very dystopian vision of the future. There were also faithful, authentic adaptations of SF literature like The Day of the Triffids and The Invisible Man in the 1980s. Back in the US, series like The Six Million Dollar Man, The Bionic Woman and The Incredible Hulk clearly have things to say about the relationship between science and humanity. I've already mentioned Battlestar Galactica but there are plenty of other examples too: Space: Above and Beyond, Farscape, Firefly, the various Star Trek spin offs. That's the beauty of science fiction - the universe is infinite!

For those who would like to read what Chapman and Cull have had to say about Doctor Who, here you go:

Nick Cull, 'Bigger on the Inside: Doctor Who as British Cultural History,' in Graham Roberts and Philip M. Taylor (eds.), The Historian, Television and Television History (University of Luton Press, 2001), pp. 95-111.

Nick Cull, 'Tardis at the OK Corral,' in John R. Cook and Peter Wright (eds.), British Science Fiction Television: A Hitchhiker's Guide (London: I.B. Tauris, 2006), pp. 52-70.

Chapman's WhoWatching blog: http://whowatching.wordpress.com/2013/05/21/review-the-name-of-the-doctor/

Nick Cull is professor of communication at University of Southern California.  He is a historian whose research focuses on the interface between politics and the mass media.  In addition to well-known books on the history of propaganda he has published widely on popular cinema and television including shorter pieces on Doctor Who, Gerry Anderson and The Exorcist.

James Chapman is professor of film at University of Leicester in the UK.  He is a historian who has specialized in popular film and television.  His work has included book length studies of James Bond, Doctor Who, British Adventure Serials, British Comic Books and British propaganda in the Second World War.  His previous collaboration with Nick Cull was a book on Imperialism in US and British popular cinema.

Projecting Tomorrow: An Interview with James Chapman and Nicholas J. Cull (Part Two)

Henry: As you suggest in your introduction, “futuristic narratives and images of SF cinema are determined by the circumstances of their production.” What relationship do you posit between the ebb and flow of visibility for science fiction films and the evolution of the American and British film industries?

Nick: When we wrote that line we were thinking mainly about the way in which historical circumstances can be channeled into SF, which is so wonderfully open to addressing contemporaneous issues by allegory (or hyperbole), but I think it can be applied to the film industries or ‘industrial context’ if you will. Cinema is a business and there are clear business cycles at work. While we found that the reputation of SF as a high-risk genre which seldom delivered on its promise to producers was exaggerated – we ran into more examples of bland returns than out-and-out ruination – it does seem to have inhibited production somewhat. Production runs in cycles, as if producers on both sides feel sure that SF will pay off, take a risk with a high-budget film, fail to realize expectations and then back off in disappointment for a couple of seasons.

2001 breaks the cycle and ushers in an SF boom which has yet to end. The successes are so spectacular that they carry the genre over the bumps. The boom made it economically feasible to develop dedicated technologies to create even better films – the story of Industrial Light and Magic is a case in point – and these technologies seem to have been best displayed in a genre which allows or even requires images of the fantastic.

I think SF has now evolved into the quintessential film genre which sells itself based on taking images to new places. There are industrial reasons reinforcing this trend, not the least being that if you make your money from exhibiting something on the big screen you need to seek out stories that are actually enhanced by that treatment. Not every genre lends itself. I doubt there will ever be a British social realist film of the sort done by Ken Loach or Mike Leigh shot in IMAX, though insights from that approach can take SF to new places - witness Attack the Block.

James: The market is also relevant here. Take Things to Come: one of the top ten films at the UK box office in 1936, but the British market alone was insufficient to recoup the costs of production and the film didn't do much business in the United States. Another theme that crops up several times is that, while Britain no longer has a large film production industry, it does have excellent studio and technical facilities. Hence big Hollywood-backed films like 2001 and Star Wars were shot in British studios with largely British crews. And there are other examples - Alien, Judge Dredd - that we didn't have space to include.

Henry: A central emphasis here is in the ways that science fiction responds to popular debates around political and technological change. It’s a cliché that Hollywood had little interest in delivering “messages,” yet historically, science fiction was a genre which sought to explore “ideas,” especially concerns about the future. How did these two impulses work themselves out through the production process? Do you see science fiction cinema as the triumph of entertainment over speculation or do most of the films you discuss make conscious statements around the core themes which SF has explored?

Nick: As I said when thinking about the literary/cinematic transition, I think that messages and ideas can have a hard time in Hollywood and often find themselves being forced out by images. That said, the messages that survive the process are all the more potent. Avatar may have been all about what James Cameron could do with digital 3-D, but it made important points about indigenous rights and the environment along the way.

James: There've been some honourable and well-intentioned attempts to build SF films around ideas or messages - Things to Come stands out - though I think that in general, and this is true of popular cinema as a whole and not just SF, audiences tend to be turned off by overt political or social messages and prefer their ideas served up within a framework of entertainment and spectacle. Nick's chapter on Star Wars, to take just one example, shows how this film was able to address a range of contemporary issues within a framework of myth and archetypes that resonated with audiences at the time and since. Here, as elsewhere, 2001 is the watershed film - perhaps the only ideas-driven SF film that was also a huge popular success.

Henry: You devote a chapter to the little-known 1930 film Just Imagine, and among other things, note that it is not altogether clear how much Hollywood or audiences understood this to be a science fiction film given its strong ties to the musical comedy as a genre. What signs do we have about the role which these genre expectations played in shaping the production and reception of Just Imagine?

Nick: Neither the producers nor the audience of Just Imagine had much idea what was going on generically. First of all, the production team were a re-assembly of the group who had worked on the studio’s boy-meets-girl hit Sunny Side Up, and all their credentials were in musical comedy; secondly, the critics who saw the film had trouble finding terminology to describe it. They tended towards terms like ‘fantasy’ and drew parallels with The Thief of Bagdad rather than Metropolis. Finally there was the question of lawsuits, as sundry writers claimed that elements we now think of as common points of the genre, such as space flight to Mars, were original to them. Courts were unimpressed.

Henry: Things to Come is one of those rare cases where a literary SF writer -- in this case, H.G. Wells -- played an active role in shaping the production of a science fiction film. What can you tell us about the nature of this collaboration and was it seen as a success by the parties involved?

James: It's a fascinating, and complex, story. This one film exemplifies perfectly the tension between ideas and spectacle that runs throughout the history of SF cinema. Wells was contracted by Alexander Korda, Britain's most flamboyant film producer, and the closest that the British industry ever had to one of the Hollywood 'movie moguls', to develop a screenplay from his book The Shape of Things to Come. Wells was interested because, unlike many writers, he believed in the potential of cinema as a medium for exploring ideas and presenting his views to a wider public.

From Korda's perspective, Wells was a 'name' whose involvement attached a degree of intellectual prestige to the film. But there were two problems. The first was that Wells tried to exercise control over all aspects of the production, even to the extent of dictating memos on what the costumes should look like - which Korda was not prepared to allow. The second problem was that The Shape of Things to Come - an imaginative 'history of the future' - is not a very cinematic book: no central characters, for example, or big set pieces. So a new story had to be fashioned.

Some aspects of Wells's vision were lost in the process. For example, the book is critical of organised religion, but the British Board of Film Censors frowned upon any criticism of the Church as an institution - so that theme goes by the wayside. And Wells's book posits the notion that a well-intentioned technocratic dictatorship - he calls it the 'Puritan Tyranny' - would be beneficial for solving the problems of the world. Again this is significantly downplayed in the film.

So there were a lot of compromises. The collaboration is perhaps best described as one of creative tensions. Publicly Wells spoke warmly of Korda and of his collaboration with director William Cameron Menzies (an American, incidentally, referring back to our previous discussion of Anglo-American contexts). But privately he was profoundly disappointed by the finished film and was scathing about Menzies, whom he described as "incompetent". In the end, Things to Come is one of those cases where the finished film reveals traces of the problematic production. For Wells it was about the ideas, for Korda it was about the spectacle - but the two are not really reconciled into a wholly satisfying experience.


Projecting Tomorrow: An Interview with James Chapman and Nicholas J. Cull (Part One)

The recently published Projecting Tomorrow: Science Fiction and Popular Film offers vivid and thoughtful case studies that consider the production and reception of key British and American science fiction movies, including Just Imagine (1930), Things to Come (1936), The War of the Worlds (1953), The Quatermass Experiment and its sequels (1955), Forbidden Planet (1956), 2001: A Space Odyssey (1968), Planet of the Apes (1968), The Hellstrom Chronicle (1971), Logan's Run (1976), Star Wars (1977), RoboCop (1987), and Avatar (2009). I very much enjoyed the background that Chapman and Cull provided on these films. Even though I was familiar with each of these films already, I managed to learn something new in every chapter. The authors did a masterful job in the selection of examples -- a mix of the essential and the surprising -- which nevertheless manages to cover many of the key periods in the genre's evolution on the screen. They make a strong case for why SF films need to be considered in their own right and not simply as an extension of the literary version of the genre. Chapman and Cull are long-time SF fans, but they also bring the skills of archival historians and expertise in global politics to bear on these rich case studies. All told, I suspect this book is going to be well received by fans and academics alike.

I have gotten to know Cull, who is a colleague of mine here at the Annenberg School for Communication and Journalism, through hallway and breakroom conversations about our mutual interests in Doctor Who and a range of other cult media properties, and I was delighted to have some interplay with Chapman when he visited USC a year or so back. I am therefore happy this week to be able to share with you an interview with the two authors that hits on some of the key themes running through Projecting Tomorrow.

 

Henry: Let me ask you a question you pose early in your introduction: “Why has SF literature been so poorly served by the cinema?” Perhaps we can broaden out from that and ask what you see as the relationship between science fiction literature and film. How do the differences between the media and their audiences result in differences in emphasis and focus?

Nick: This is an excellent question. My sense is that SF literature has tended to serve divergent objectives from SF film. I am taken by the British novelist/critic Kingsley Amis’s observation fifty years ago that the idea is the hero in literary science fiction. My corollary to that is that the image is the hero in SF cinema. Cinema by its nature emphasizes image over ideas, and all the more so as the technology to generate ever more spectacular images has advanced.

James: I think there's also a sense in which SF literature has always been a slightly niche interest - popular with its readership, yes, but generally not best-seller levels of popular. SF cinema, in contrast, is now a mainstream genre that has to serve the needs of the general cinema-going audience as well as genre fans. Hence the charge from SF readers that cinema by and large doesn't do SF very well - that the need to attract a broad audience (because of the expense of the films) leads to a diluting of the 'ideas' aspect of SF in literature. One of the themes we track in the book is the process through which SF went from being a marginal genre in cinema to becoming, from the 1970s, a major production trend.

Henry: What criteria led to the selection of the case studies you focus upon in Projecting Tomorrow?

Nick: We chose films that could represent the SF cinema tradition on both sides of the Atlantic and illuminate a range of historical issues. We needed films that had a good supply of archive material to which we could apply our historical research methods, and all the better if that material had hitherto escaped scholarly analysis. We wanted the milestones to be present but also some surprise entries too. There were some hard choices. We doubted there was anything really new to say about Blade Runner, so that proposed chapter was dropped. I was keen to write about Paul Verhoeven’s Starship Troopers but was unable to locate sufficient archive material for a historical approach. It was during that search that I found the treasure trove of material from Verhoeven’s RoboCop and decided to write about that instead. One of the Star Trek films and Jurassic Park were also late casualties from the proposal. There are some surprise inclusions too. We both find the combination of genres a fascinating phenomenon and hence included The Hellstrom Chronicle, which grafts elements of SF onto the documentary genre and managed to spawn a couple of SF projects in the process.

James: The selection of case studies was a real problem for this book, as SF is such a broad genre in style and treatment, and there are so many different kinds of stories. We wanted to have broad chronological coverage: the 'oldest' film is from 1930 (Just Imagine) and the most recent is 2009 (Avatar). It would have been possible to write a dozen case studies focusing on a single decade - the 1950s, for example, or the 1970s, both very rich periods for SF cinema - but we felt this would have been less ambitious and would not have enabled us to show how the genre, and its thematic concerns, have changed and evolved over time. Beyond that, Nick and I are both historians by training, and we wanted examples where there was an interesting story behind the film to tell. Logan's Run, for example, is a case where the production history is in certain ways more interesting than the finished film: George Pal had wanted to make it in the late 1960s as a sort of 'James Bond in Tomorrowland' but for various reasons it didn't happen then, and when it was finally made, in the mid 1970s, the treatment was more serious (and perhaps portentous). Some films selected themselves: we could not NOT have milestones like Things to Come and 2001: A Space Odyssey - and in the latter case the Stanley Kubrick Archive had recently been opened to researchers and so there were new primary sources available. I wanted to include Dark Star, a sort of spoof response to 2001, but there wasn't much in the way of archive sources and the background to the film is quite well known - and in any event we already had plenty of other case studies from the 1970s. 
In the end, although we had to leave out some important films, like Invasion of the Body Snatchers (I'd simply refer readers to Barry Keith Grant's excellent study of this film in the British Film Institute's 'Film Classics' series), this meant we could find space for some forgotten films, such as Just Imagine, and for some that are probably less familiar to US audiences, such as The Quatermass Experiment.

Henry: You have made a conscious choice here to include British as well as American exemplars of science fiction. How would you characterize the relationship between the two? In what ways do they intersect? How are the two traditions different?

Nick: British and American SF, and culture more widely, are thoroughly intertwined. The sad truth is that US corporate culture tends to homogenize, so I think it helps to have the UK bubbling along across the pond as a kind of parallel universe in which different responses can emerge and save the creative gene pool from in-breeding. SF cinema has seen some great examples of this Anglo-American cross-fertilization process. 2001: A Space Odyssey is a terrific example of that. If I had to essentialize the difference between the two approaches, I’d say that Britain is a little more backward-looking (anticipating steampunk) and the US has been more comfortable with a benign military presence. Today the two traditions have become so interlinked that it is very difficult to disengage them, but they seem to be good for each other.

James: The Anglo-American relationship was also something we'd explored in our first book together, Projecting Empire, where we found there were strong parallels in the representation of imperialism in Hollywood and British cinema. In that book we have two case studies by Nick, on Gunga Din and The Man Who Would Be King, showing how a British author, Rudyard Kipling, met the ideological needs of American film-makers. The equivalent of Kipling for science fiction is H.G. Wells, a British author widely adapted, including by Hollywood - and again we have two case studies of Wellsian films. If I were to generalize about the different traditions of US and UK science fiction - and this is a gross over-simplification, as there are numerous exceptions - it would be that by and large American SF movies have held to a generally optimistic view of the future whereas British SF, certainly since the Second World War, has been more pessimistic. This might reflect the contrasting fortunes of the two nations since the mid-twentieth century - American films expressing the optimism and confidence of the newly emergent superpower, British films coming to terms with the slow decline of a former imperial power. But, as I said, this is an over-simplification. Planet of the Apes, for example, has a very dystopian ending (though later films in the series are more optimistic in suggesting the possibility of peaceful future co-existence), whereas Doctor Who (albeit from television) is an example of British SF with a generally positive outlook on the future.

Nick Cull is professor of communication at the University of Southern California.  He is a historian whose research focuses on the interface between politics and the mass media.  In addition to well-known books on the history of propaganda he has published widely on popular cinema and television, including shorter pieces on Doctor Who, Gerry Anderson, and The Exorcist.

James Chapman is professor of film at the University of Leicester in the UK.  He is a historian who has specialized in popular film and television.  His work has included book-length studies of James Bond, Doctor Who, British adventure serials, British comic books, and British propaganda in the Second World War.  His previous collaboration with Nick Cull was a book on imperialism in US and British popular cinema.

Guerrilla Marketing: An Interview with Michael Serazio (Part Two)

You make an interesting argument here that today’s guerrilla advertising represents the reverse of the culture jamming practices of the 1980s and 1990s, i.e. if culture jamming or adbusting involved the hijacking of Madison Avenue practices for an alternative politics, then today’s branding often involves the hijacking of an oppositional stance/style for branding purposes. Explain.

There have been various examples that have popped up here and there that hint at this hijacking: Adbusters magazine’s apparent popularity with ad professionals; PBR’s marketing manager looking to No Logo for branding ideas; heck, AdAge even named Kalle Lasn one of the “ten most influential players in marketing” in 2011.  Similarly, you see this subversive, counterculture ethos in the work of Crispin Porter + Bogusky, the premier ad shop of the last decade.  But I think the intersection goes deeper than these surface ironies and parallels.  There’s something about the aesthetics and philosophy of culture jamming that contemporary advertising finds enticing (especially when trying to speak to youth audiences): It channels a disaffection with consumer culture; a streetwise sensibility; and so on.  For culture jammers, such stunts and forms as flash mobs and graffiti art are political tools; for advertisers, they’re just great ways to break through the clutter and grab attention.  More abstractly, culture jammers see branding as an elaborate enterprise in false consciousness that needs to be unmasked toward a more authentic lived experience; guerrilla marketers, on the other hand, simply see culture jamming techniques as a way of reviving consumers from the “false consciousness” of brand competitors.  Think different, in that sense, works equally well as an Apple slogan and a culture-jamming epigram.

 

You cite one advertising executive as saying, “friends are better at target marketing than any database,” a comment that conveys the ways that branding gets interwoven with our interpersonal relationships within current social media practices. What do you see as some of the long-term consequences of this focus on consumer-to-consumer marketing?

 

In a sense, the whole book – and not merely the friend-marketing schemes – is an exploration of how commercial culture can recapture trust amidst rampant consumer cynicism.  That’s what drives guerrilla marketing into the spaces we’re seeing it: pop culture, street culture, social media, and word-of-mouth.  These contexts offer “authenticity,” which advertisers are ever desperate to achieve given their fundamental governmental task is actually the polar opposite: contrivance.  (Sarah Banet-Weiser’s new book offers a sophisticated analysis of this fraught term across wide-ranging contexts in this regard.)  As far as long-term consequences go, I think it’s important to keep in mind the complicity of consumers in this whole process: In other words, being a buzz agent is still just a voluntary thing.  It’s not like these participants are being duped or exploited into participating.  It’s worth accounting for that and asking why shilling friends is acceptable in the first place.  Is it because of some kind of “social capitalism” wherein we already think of ourselves in branding terms and use hip new goods to show we’re in the marketplace vanguard?  The book is, of course, only a study of marketers not consumers, so it’s pure conjecture, but I think understanding that complicity is key to any long-term forecast of these patterns’ effects on our relationships and culture.

 

Both of our new books pose critiques of the concept of “the viral” as they apply to advertising and branding, but we come at the question from opposite directions. What do you see as the core problems with the “viral” model?

 

From my perspective, there’s an implicit (and not necessarily automatically warranted) populism that accompanies the viral model and label.  Viral success seems to “rise up” from the people; it has a kind of grassroots, democratic, or underground ethos about it.  In some cases, this is deserved, as we see when some random, cheap YouTube video blows up and manages to land on as many screens and in front of as many eyeballs as a Hollywood blockbuster that has all the promotional and distribution machinery behind it.  And because viral is supposedly underdog and populist, it’s “authentic,” so advertisers and brands naturally gravitate toward it, which, for me, makes it an intriguing object of study.  Abstractly speaking, that, too, is at the heart of the book’s inquiry and critique: the masquerades and machinations of powerful cultural producers (like advertisers) working through surrogate channels (like viral) that exude that authentic affect in different guises (here, populist).  Again, this is not to invalidate the genuine pluckiness of a “real” viral hit; it’s simply to keep watch on efforts to digitally “astroturf” that success when they show up.

 

While this blog has often treated what I call “transmedia storytelling” or what Jonathan Gray discusses as “paratexts” sympathetically as an extension of the narrative experience, you also rightly argue that it is an extension of the branding process. To what degree do you see, say, alternate reality games as an extension of the new model of consumption you are discussing in this book? Do their commercial motives negate the entertainment value such activities provide?

 

Oh, certainly not – and I should clarify here that I’m by no means taking the position that commercial motives necessarily negate the pleasure or creativity of participatory audiences.  Alternate reality games (or alternate reality marketing, as I call it) are, in a sense, the fullest extension of many of these practices, themes, and media platforms scattered throughout the book.  They feature outdoor mischief (e.g., flash mob-type activities) and culture jamming-worthy hoaxes, seek to inspire buzz and social media productivity from (brand) communities, and, above all, seem to be premised upon “discovery” rather than “interruption” in the unfolding narrative.  And the sympathetic treatments of their related elements (transmedia storytelling, paratexts) are assuredly defensible.  But they are, also, advertising – and, for my purposes here, they’re advertising that tries not to seem like advertising.  And, again, I believe that in that self-effacement, much is revealed about today’s cultural conditions.

 

You end the book with the observation that “more media literacy about these guerrilla efforts can’t hurt.” Can you say more about what forms of media literacy would be desirable? What models of media change should govern such efforts? What would consumers/citizens need to know in order to change their fates given the claims about structure and agency you make throughout the book?

 

I suppose I end the book on a lament as much as a diatribe.  I’m not an abject brand-hater and I hope the book doesn’t come off that way.  That said, I certainly do empathize with the myriad critiques of advertising mounted over the years (i.e., its divisive designs on arousing envy, its ability to blind us to the reality of sweatshop labor, its unrealistic representation of women’s bodies, etc.).  The media literacy I aim for is awareness that these commercial forms are (often invisibly) invading spaces in which we have not traditionally been accustomed to seeing advertising.  In general, brands don’t address us on conscious, rational terms and, thus, if we’re wooed by them, our subsequent consumer behavior is not necessarily informed as such.  In that sense, I guess, it’s as much a Puritan critique of commercialism as it is, say, Marxist.  Media literacy like this would encourage consumers to think carefully and deeply about that which advertisers seek to self-efface and to (try to) be conscious and rational in the face of guerrilla endeavors that attempt to obfuscate and bypass those tendencies.  The cool sell is an enticing seduction.  But we can – and do – have the agency to be thoughtful about it.

Thanks very much for the opportunity to discuss the book!

Michael Serazio is an assistant professor in the Department of Communication whose research, writing, and teaching interests include popular culture, advertising, politics, and new media.  His first book, Your Ad Here: The Cool Sell of Guerrilla Marketing (NYU Press, 2013), investigates the integration of brands into pop culture content, social patterns, and digital platforms amidst a major transformation of the advertising and media industries.  He has work appearing or forthcoming in Critical Studies in Media Communication, Communication Culture & Critique, Television & New Media, and The Journal of Popular Culture, among other scholarly journals.  He received his Ph.D. from the University of Pennsylvania's Annenberg School for Communication and also holds a B.A. in Communication from the University of San Francisco and an M.S. in Journalism from Columbia University.  A former staff writer for the Houston Press, he was recognized as a finalist for the Livingston Awards for his reporting, and he has written essays on media and culture for The Atlantic, The Wall Street Journal, The Nation, and Bloomberg View.  His webpage can be found at: http://sites.google.com/site/linkedatserazio

Guerrilla Marketing?: An Interview with Michael Serazio (Part One)

Transmedia, Hollywood 4: Spreading Change. Panel 1 - Revolutionary Advertising: Creating Cultural Movements from UCLA Film & TV on Vimeo.

From time to time, I have been showcasing, through this blog, the books which Karen Tongson and I have been publishing through our newly launched Postmillennial Pop series for New York University Press. For example, Karen ran an interview last March with Lucy Mae San Pablo Burns, author of the series’s first book, Puro Arte: Filipinos on the Stage of Empire. This week, I am featuring an exchange with Michael Serazio, the author of another book in the series, Your Ad Here: The Cool Sell of Guerrilla Marketing, and I have arranged to feature interviews with the other writers in the series across the semester.

We were lucky to be able to feature Serazio as one of the speakers on a panel at last April's Transmedia Hollywood 4: Spreading Change conference (see the video above), where he won people over with his soft-spoken yet decisive critiques of current branding and marketing practices. Your Ad Here achieves an admirable balance: it certainly raises very real concerns about the role which branding and marketing play in contemporary neo-liberal capitalism, calling attention to the hidden forms of coercion often deployed in approaches which seem to be encouraging a more "empowered" or "participatory" model of spectatorship. Yet he also recognizes that the shifting paradigm amounts to more than a rhetorical smokescreen, and so he attempts to better understand the ways that brands are imagining their consumers at a transformative moment in the media landscape. His approach is deeply grounded in the insider discourses shaping Madison Avenue, yet he also can step outside of these self-representations to ask hard questions about what it means to be a consumer in this age of converged and grassroots media.  I was struck as we were readying this book for publication that it was ideally read alongside two other contemporary publications -- Sarah Banet-Weiser's Authentic TM: The Politics of Ambivalence in Brand Culture (see my interview with Banet-Weiser last spring) and our own Spreadable Media: Creating Meaning and Value in a Networked Culture (co-authored with Sam Ford and Joshua Green). Each of these books comes at a similar set of phenomena -- nonconventional means of spreading and attracting attention to messages -- through somewhat different conceptual lenses.

You will get a better sense of Serazio's unique contributions to this debate by reading the two-part interview which follows.

You discuss the range of different terminology the industry sometimes uses to describe these emerging practices, but end up settling on “Guerrilla Marketing.” Why is this the best term to describe the practices you are discussing?

 

Conceptually, I think “guerrilla” marketing best expresses the underlying philosophy of these diverse practices.  To be certain, I’m appropriating and broadening industry lingo here: If you talk to ad folks, they usually only think of guerrilla marketing as the kind of wacky outdoor stunts that I cover in chapter 3 of the book.  But if you look at the logic of branded content, word-of-mouth, and social media strategies, you see consistent patterns of self-effacement: the advertisement trying to blend into its non-commercial surroundings – TV shows and pop songs, interpersonal conversations and online social networks.  Advertising rhetoric has long doted upon militarized metaphors – right down to the fundamental unit of both sales and war: the campaign. 

But when I started reading through Che Guevara’s textbook on guerrilla warfare, I heard parallel echoes of how these emerging marketing tactics were being plotted and justified.  Guerrilla warfare evolved from conventional warfare by having unidentified combatants attack outside clearly demarcated battle zones.  Guerrilla marketing is an evolution from traditional advertising (billboards, 30-second spots, Web banners, etc.) by strategizing subtle ad messages outside clearly circumscribed commercial contexts.  Guerrilla warfare forced us to rethink the meaning of and rules for war; guerrilla marketing, I would argue, is doing the same for the ad world.

 

Let’s talk a bit more about the concept of “clutter” that surfaces often in discussions of these advertising practices. On the one hand, these new forms of marketing seek to “cut through the clutter” and grab the consumer’s attention in a highly media-saturated environment, and on the other, these practices may extend the clutter by tapping into previously unused times and spaces as the focal point for their branding effort. What do you see as the long-term consequences of this struggle over “clutter”?

 

Matthew McAllister had a great line from his mid-1990s book that tracked some of these same ad trends to that point: “Advertising is… geographically imperialistic, looking for new territories that it has not yet conquered.  When it finds such a territory, it fills it with ads – at least until this new place, like traditional media, has so many ads that it becomes cluttered and is no longer effective as an ad medium.”  I think this encapsulates what must be a great (albeit bitter) irony for advertisers: You feel like your work is art; it’s all your competitors’ junk that gets in the way as clutter. 

As to the long-term fate of the various new spaces hosting these promotional forms, I don’t have much faith that either media institutions or advertisers will show commercial restraint if there’s money to be made and eyeballs to be wooed.  I think eventually pop culture texts like music tracks and video games will be as saturated as film and TV when it comes to branded content; journalism, regrettably, seems to be leaning in the same direction with the proliferation of “native advertising” (sponsored content).  Facebook and Twitter have been trying to navigate this delicate balance of clutter – increasing revenues without annoying users – but here, too, it doesn’t look promising.

If audiences were willing to pay for so much of the content and access which they’ve grown accustomed to getting for free, then clutter might not be the expected outcome here, but I’m not terribly sanguine on that front either.  The one guerrilla marketing tactic I don’t see over-cluttering its confines is word-of-mouth, just because, as a medium (i.e., conversation), it remains the “purest,” comparatively, and it’s hard to imagine how that (deliberate, external) commercial saturation would look or play out.

 

There seems to be another ongoing tension in discussions of contemporary media between a logic of “personalization” and individualization on the one hand and a logic of “social” or “networked” media on the other. Where do you see the practices you document here as falling on that continuum? Do some of these practices seem more individualized, some more collective?

 

Really interesting question, and here I’ll borrow Rob Walker’s line from Buying In on the “fundamental tension of modern life” (that consumer culture seeks to resolve): “We all want to feel like individuals.  We all want to feel like a part of something bigger than ourselves.”

The guerrilla marketing strategies that are showing up in social media probably best exemplify this paradox.  On one hand, brands want to give fans and audiences the tools for original self-expression and simultaneously furnish the spaces for that networked socialization to take root.  On the other hand, all that clearly needs to be channeled through commercial contexts so as to achieve the “affective economics” that you identified in Convergence Culture.

I look at something like the branded avatar creation of, say, MadMenYourself.com, SimpsonizeMe.com, or OfficeMax’s “Elf Yourself” online campaign as emblematic pursuits in this regard.  The “prosumer” can fashion her identity through the aesthetics of the brand-text (i.e., personalization) and then share it through her social networks (i.e., it’s assumed to be communally useful as well).  But, as I note in a forthcoming article in Television & New Media, these tools and avenues for expression and socialization are ultimately limited to revenue-oriented schemes – in other words, corporations are not furnishing these opportunities for self-discovery and sharing from an expansive set of possibilities.  They’re only allowed to exist if they help further the brand’s bottom line.

Michael Serazio is an assistant professor in the Department of Communication whose research, writing, and teaching interests include popular culture, advertising, politics, and new media.  His first book, Your Ad Here: The Cool Sell of Guerrilla Marketing (NYU Press, 2013), investigates the integration of brands into pop culture content, social patterns, and digital platforms amidst a major transformation of the advertising and media industries.  He has work appearing or forthcoming in Critical Studies in Media Communication, Communication Culture & Critique, Television & New Media, and The Journal of Popular Culture, among other scholarly journals.  He received his Ph.D. from the University of Pennsylvania's Annenberg School for Communication and also holds a B.A. in Communication from the University of San Francisco and an M.S. in Journalism from Columbia University.  A former staff writer for the Houston Press, he was recognized as a finalist for the Livingston Awards for his reporting, and he has written essays on media and culture for The Atlantic, The Wall Street Journal, The Nation, and Bloomberg View.  His webpage can be found at: http://sites.google.com/site/linkedatserazio

A Whale Of A Tale!: Ricardo Pitts-Wiley Brings Mixed Magic to LA

Last February, I announced here the release of Reading in a Participatory Culture, a print book, and Flows of Reading, a d-book extension, both focused on work my teams (first at MIT and then at USC) have done exploring how we might help educators and students learn about literary works through actively remixing them. Our central case study has been the work of playwright-actor-educator Ricardo Pitts-Wiley from the Mixed Magic Theater, who was successful at getting incarcerated youth to read and engage with Herman Melville's Moby-Dick by having them re-imagine and re-write it for the 21st century. You can read more about this project here. And you can check out the Flows of Reading d-book for free here.
If you live in Los Angeles, you have a chance to learn more about Pitts-Wiley and his work firsthand. I've been able to bring Ricardo for a residency at USC this fall, which will start with a public event at the Los Angeles Public Library on September 26. Ricardo is going to be recruiting a mixed-race cast of high school- and college-aged actors from across the Los Angeles area and producing a staged reading of his play, Moby-Dick: Then and Now, which will be performed as part of a USC Visions and Voices event on Oct. 11. You can get full details of both events below. I hope to see some of you there. We are already hearing from all kinds of artists here in Southern California who have sought creative inspiration from Melville's novel and used it as a springboard for their own work. But you don't have to love the great white whale to benefit from our approach to teaching traditional literary works in a digital culture, and we encourage teachers and educators of all kinds to explore how they might apply our model to thinking about many other cultural texts.
For those who live on the East Coast, our team will also be speaking and doing workshops at the National Writing Project's national conference in Boston on Nov. 21.
Thursday, September 26, 2013 7:15 PM
Mark Taper Auditorium-Central Library
Thu, Sep 26, 7:15 PM [ALOUD]
Remixing Moby Dick: Media Studies Meets the Great White Whale 
Henry Jenkins, Wyn Kelley, and Ricardo Pitts-Wiley

Over a multi-year collaboration, playwright and director Ricardo Pitts-Wiley, Melville scholar Wyn Kelley, and media expert Henry Jenkins have developed a new approach for teaching Moby-Dick in the age of YouTube and hip-hop. They will explore how "learning through remixing" can speak to contemporary youth, why Melville might be understood as the master mash-up artist of the 19th century, and what might have happened if Captain Ahab had been a 21st century gang leader.

* Part of the Library Foundation of Los Angeles and Los Angeles Public Library’s month-long citywide initiative "What Ever Happened to Moby Dick?"

 

Henry Jenkins is Provost's Professor of Communication, Journalism, and Cinematic Arts at the University of Southern California. He has written and edited more than fifteen books on media and popular culture, including Spreadable Media: Creating Meaning and Value in a Networked Culture with Sam Ford and Joshua Green. His other published works reflect the wide range of his research interests, touching on democracy and new media, the “wow factor” of popular culture, science-fiction fan communities, and the early history of film comedy. His most recent book, Reading in a Participatory Culture: Remixing Moby-Dick for the Literature Classroom was written with Wyn Kelley, Katie Clinton, Jenna McWilliams, Erin Reilly, and Ricardo Pitts-Wiley.

Wyn Kelley teaches in the Literature Section at the Massachusetts Institute of Technology, and is author of Melville's City: Literary and Urban Form in Nineteenth-Century New York and of Herman Melville: An Introduction. She also co-authored Reading in a Participatory Culture: Re-Mixing Moby-Dick in the English Classroom with Henry Jenkins and Ricardo Pitts-Wiley. She is former Associate Editor of the Melville Society journal Leviathan, and editor of the Blackwell Companion to Herman Melville. A founding member of the Melville Society Cultural Project, she has collaborated with the New Bedford Whaling Museum on lecture series, conferences, exhibits, and a scholarly archive. She serves as Associate Director of MEL (Melville Electronic Library), an NEH-supported interactive digital archive for reading, editing, and visualizing Melville’s texts.

Ricardo Pitts-Wiley is the co-founder of the Mixed Magic Theatre, a non-profit arts organization dedicated to presenting a diversity of cultural and ethnic images and ideas on the stage. While serving as Mixed Magic Theatre’s director, Pitts-Wiley gained national and international acclaim for his page-to-stage adaptation of Moby Dick, titled Moby Dick: Then and Now. This production, which was presented at the Kennedy Center for the Arts in Washington, DC, is the centerpiece of a national teachers study guide and is featured in the book Reading in a Participatory Culture. In addition to his work as an adapter of classic literature, Pitts-Wiley is also the composer of over 150 songs and the author of 12 plays with music, including Waiting for Bessie Smith, Celebrations: An African Odyssey, and The Spirit Warrior’s Dream.

Building Imaginary Worlds: An Interview with Mark J. P. Wolf (Part Four)

You make an important observation about the nature of sequels here: “A trade-off between novelty and familiarity occurs: The world is no longer new to the audience, but the burden of exposition is lessened by what has already been revealed of the world.” So, one could argue that the creator of the sequel is freed to do different things than the creator of the original. If this is true, what are some examples of creators who have effectively exploited that freedom to do something interesting?

Contrary to the negative image many people have when it comes to sequels, some sequels are generally considered better than the original works they follow.  In my experience, anyway, people often say they like The Empire Strikes Back (1980) better than Star Wars (1977); The Lord of the Rings certainly builds interestingly on the world introduced in The Hobbit; and Riven (1997) is arguably better than Myst (1993).  In all three cases, the sequel expands the world a great deal, while carrying forth characters and situations established in the original.  Each also builds upon how the world works, extending the world logic from the original work (for example, the rules regarding the functioning of lightsabers, the One Ring, and linking books).

Authors may find it easier to develop an existing world than begin an entirely new one, and expansions can still introduce as much (or more) new world material as the original work.  Practically speaking, this can also have to do with economics; a longer book or a higher-budget movie is more of a financial risk to a publisher or studio, so original works that introduce a world may be more likely to be represented by shorter books and lower-budget movies, while sequels to a successful work will be given more investment (for example, Pitch Black (2000) had a budget of $23 million, while its sequel, The Chronicles of Riddick (2004), had a budget of over $100 million).  Whatever the case, world expansion is necessary for a sequel to continue to engage an audience that wants to revisit the world it likes but, at the same time, to have new experiences in it.

The Star Trek film this summer was controversial precisely because of its retconning practices: its restaging and reimagining of the Wrath of Khan narrative seems to have been embraced by some new fans, but at the cost of much gnashing of teeth by veteran fans (myself among them). What does this example tell us about the challenges or opportunities in revising or revisiting familiar worlds within popular media?

 In the case of Star Trek, they wanted to have it both ways; a fresh start, while keeping continuity with what has come before.  Technically speaking, the 2009 film was not a reboot, due to the “alternate timeline” device (complete with “Spock Prime”, with a Nimoy cameo that may have been thrown in to suggest to fans that if Nimoy accepts this film enough to appear in it, then you ought to accept it, too), but most people still consider it a reboot.  Like you, I, too, found this recasting of Kirk, Spock, and McCoy annoying; why not move on to a new set of characters, as the TV shows did, and mostly successfully at that?  And the motive here is not the best, either.

Retconning, even when not taken to the extreme of a reboot, may be done for artistic or narrative reasons (as with Tolkien’s adjusting of The Hobbit so as to fit The Lord of the Rings better, or Lucas’s clean-up and added effects shots in the “Special Edition” re-releases), but when it’s done for purely commercial reasons, it may seem more suspect, since money, not art, is the motive.  Of course, the degree to which retcon occurs also makes a difference; cosmetic changes are more easily accepted than those which change the story or characters too directly or too much.  It’s also usually easier to reboot a character-based franchise than a world-based franchise, because the background world is closer to the Primary world to begin with, and reboots are usually concerned with bringing things up-to-date more than anything else; changes in the Batman, Spider-Man, and Superman franchises are a good example of this.

But in more world-centered franchises, like Star Trek and Star Wars, you don’t need to have the same characters present all the time (Star Trek shows remained on TV from 1987 to 2005 without relying on Kirk, Spock, and McCoy); it’s the world, with all its designs, devices, locations, and logic, that keeps the audience engaged.  So one can only hope that Abrams doesn’t reboot Star Wars.

 

While, as the book jacket informs us, you reference more than 1400 imaginary worlds across the book, certain worlds keep recurring, among them Oz, Middle-earth, and the worlds associated with Star Wars. What makes these worlds especially rich for your analysis?

Certain worlds are larger and better developed, so when it comes to discussions of infrastructures, internarrative theory, sequence elements, and so forth, there’s just more to work with and examine in larger worlds.  Their infrastructures have thousands of pieces, and some are still growing.  The worlds of Oz, Middle-earth, and Star Wars are also very influential ones, and ones that helped forge the imaginary world tradition.  And, when using them for examples, they are also worlds that the readers of the book will likely already be familiar with, and which require less explanation.  But there are so many interesting ones that are less known, and deserve more attention (like Defontenay’s Star and Dewdney’s Planiverse), and I try to highlight some of these in the book as well.

You revisit the familiar debate between interactivity and storytelling (or narratology and ludology as it is sometimes framed in game studies), which seems to be unresolvable. But, you suggest that there are ways to reconcile competing claims for game play and world building/exploring. Can you elaborate a bit? How might this focus on immersion offer us a way of reframing these core issues in Game Studies (which is a field where you have made many contributions already)?

Video games and the experiencing of imaginary worlds both involve such activities as world exploration and the learning of a set of intrinsic rules and how things work, and in many cases, piecing together a backstory and seeing how actions relate to consequences.  So interactivity is perhaps more compatible with the experiencing of a world than it is with the telling of a story.  It’s the difference between causality and a causal chain of events.  You can have causality built into a world, so that one kind of action results in a particular result, but due to choices made or chance, you don’t always get the same sequence of events.  A fixed causal chain of events, like that found in a novel, stays the same and doesn’t change.  But a world can accommodate either.

The interactivity vs. storytelling debate is really a question of the author saying either “You choose” (interaction) or “I choose” (storytelling) regarding the events experienced; it can be all of one or all of the other, or some of each to varying degrees; and even when the author says “You choose”, you are still choosing from a set of options chosen by the author.  So it’s not just a question of how many choices you make, but how many options there are per choice.  Immersion, however, is a different issue, I think, which does not always rely on choice (such as immersive novels), unless you want to count “Continue reading” and “Stop reading” as two options you are constantly asked to choose between.  And it isn’t just immersion, but what happens after it as well (that is, absorption, saturation, and overflow, which are discussed in detail in chapter one).  By focusing on worlds and the experience of them, video games studies will be better able to make comparisons between game experiences, and describe them in a more world-based fashion.


Your term “deinteractivation” may be somewhat awkward, but it also provides the best explanation I’ve seen for why it has been so difficult to translate games into other kinds of media experiences -- for example, films or comics. Defiance represents an interesting recent experiment. To what degree do you think it has been successful at creating a world that bridges between games and television?


Although long, I believe “deinteractivation” is a morphologically sound coinage, related as it is to words like “deactivation” and “interact”, and no more clumsy than the words one finds in polysynthetic languages like Ainu, Chukchi, or Yupik. Hopefully its etymology is clear enough to indicate that it means the removal of interactivity, something that occurs when a world or narrative makes a transmedial move from an interactive medium to a noninteractive medium.  (As a form of adaptation, it is something we will see more of as video games make transmedial moves into other media.)

In the case of Defiance, where both the TV show and the game were developed together, I would say that the world appears to have been designed with both media in mind already, rather than beginning as a game that was later adapted into a TV series.  But it will be interesting to see which survives longer, the game or the show, and how they influence each other.

Also, the fact that the game is an MMORPG means that it is more interactive than a stand-alone game, but also more vulnerable to the ravages of time; while a stand-alone game, just like a TV show, could be experienced in its entirety at some future date, an MMORPG can really only be experienced as it is being run; once it ends, it’s over, and it cannot be re-experienced.  It may be too soon to know whether it successfully bridges the two media, or creates a solid fan community around the show/game experience (will other media venues follow?).

Perhaps we can get some idea from the text on the Defiance website: “Players of the Defiance game will have the chance to cross over into the show in a new and bigger way: By seeing their avatar recreated as a character in the second season!” First, this conflates players and their avatars, and second, the avatar will be “recreated” as a character; so it appears all interactivity has been lost during these two removes.  One can imagine other ways an MMORPG and a TV show could be linked; for example, a TV show could be made from the events of the MMORPG as they occur, or perhaps the storyline of the TV show could be adjusted based on the events of the MMORPG (the game events could provide the ever-changing background for the TV characters’ lives; if a revolution is stirred up in the MMORPG, it occurs in the background of the TV show, affecting the characters, and so on).  If nothing else, though, and whatever the outcome, Defiance is an interesting experiment, its name perhaps referring to its attitude toward standard television fare.

Mark J. P. Wolf is a Professor in the Communication Department at Concordia University Wisconsin.  He has a B. A. (1990) in Film Production and an M. A. (1992) and Ph. D. (1995) in Critical Studies from the School of Cinema/Television (now renamed the School of Cinematic Arts) at the University of Southern California.  His books include Abstracting Reality: Art, Communication, and Cognition in the Digital Age (2000), The Medium of the Video Game (2001), Virtual Morality: Morals, Ethics, and New Media (2003), The Video Game Theory Reader (2003), The World of the D’ni: Myst and Riven (2006), The Video Game Explosion: A History from PONG to PlayStation and Beyond (2007), The Video Game Theory Reader 2 (2008), Before the Crash: An Anthology of Early Video Game History (2012), the two-volume Encyclopedia of Video Games: The Culture, Technology, and Art of Gaming (2012), Building Imaginary Worlds: The Theory and History of Subcreation (2012), The Routledge Companion to Video Game Studies (forthcoming), Mister Rogers' Neighborhood (forthcoming), Video Games Around the World (forthcoming), and LEGO Studies: Examining the Building Blocks of a Transmedial Phenomenon (forthcoming) and two novels for which he has begun looking for an agent and publisher.  He is also founder and co-editor of the Landmark Video Game book series from University of Michigan Press.  He has been invited to speak in North America, Europe, Asia, and Second Life, and is on the advisory boards of Videotopia and the International Journal of Gaming and Computer-Mediated Simulations, and on several editorial boards including those of Games and Culture, The Journal of E-media Studies, and Mechademia: An Annual Forum for Anime, Manga and The Fan Arts.  He lives in Wisconsin with his wife Diane and his sons Michael, Christian, and Francis.  [mark.wolf@cuw.edu]

Building Imaginary Worlds: An Interview With Mark J. P. Wolf (Part Two)

There is a tendency for critics to dismiss sequels and prequels as being driven almost entirely by commercial motives. Yet, you show here that such structures have a much longer history. What does this history tell us about other motives that might drive such devices?

Sequels and prequels (and other kinds of sequence elements) are seen as commercially attractive in the first place only because there are other motives for wanting them to begin with; if not, then why would they be thought of as having commercial potential? I think the main reason for wanting them is the idea of returning for more visits to a world that you like, whether you are the audience or the author. That’s really the only reason there is; if you don’t like the world enough to want to go back, there’s no reason to make another work which is set there. One can experience the same work multiple times (as happens with works like Star Wars (1977) or The Lord of the Rings (1954-1955)), which may be rich enough in detail to require multiple visits in order to notice everything, but ultimately audiences will want new experiences within the same world.

Authors who world-build will often create a world to house the original story set there, but if they like world-building, they may go beyond the needs of the story, generating more and more of the world, eventually developing additional narratives set in the world, usually ones connected in some way to the original narrative. The same thing can happen with popular characters that people want to hear more about; a rather early sequel, the second part of Cervantes’s Don Quixote (part one, 1605; part two, 1615), was written in response to the audience’s desire to read more of Quixote’s adventures, and particularly because a spurious sequel had appeared after the first volume was published (the spurious sequel is even mentioned within the second volume of Don Quixote, and is condemned by the characters as noncanonical).

So I think commercial potential can only exist if other motives already exist; although one could try to make a sequel to a failed work set in an unpopular world, such a work is also likely to be a commercial failure (even if it is better than the original, its success could actually be hurt by its association with the original, if the original is disliked enough).

You note throughout that world-building (and world-exploring on the part of the audience) can be an activity which is meaningful quite apart from its role in a particular story. There may well be films -- Avatar comes to mind -- which are widely criticized for their stories but widely praised for their world-building, and there are certainly directors -- Tim Burton for example -- who consistently seem more interested in exploring worlds than in telling coherent stories. Why, then, do we tend to devalue world-building in favor of story-telling when we evaluate so many media and literary texts?

 In works set in the Primary world (which are arguably the dominant kind), world-building mainly exists to serve storytelling, not the other way around. Thus, world-building is seen as a background activity, something done not for its own sake, and something done only to the extent necessary for a story to be told. As such, most critics’ methods of analysis still center around story (and such things as character, dialogue, and events) as the way meaning is conveyed, and many are intolerant of anything that departs too far from the “realism” of the Primary world (Tolkien notes this when he points out how calling something “escapist” is considered by many as an insult, whereas he says this is confusing the “escape of the prisoner” with the “flight of the deserter”; but this is another issue altogether).

But, of course, character, dialogue, and events are not the only ways meaning is conveyed; only the most obvious ways.

A world’s default assumptions, which differ from the Primary world, can suggest new ways that something can be considered, and perhaps make an audience more aware of their own assumptions they normally take for granted. Just like encountering other cultures can help you become more aware of your own culture, and make you realize that there are other ways of doing things or other ways to live, imaginary worlds can comment on the Primary world through their differences, they can embody other ideas and philosophies, and convey meaning in a variety of ways beyond the traditional ways found in stories set in the Primary world.

But these effects are most powerful when the worlds in question have a high degree of completeness and consistency, in order to be believable; when this is lacking, the world may risk being rejected as too outlandish and merely silly. And enough worlds are that way that critics may regard even good ones suspiciously. The popularity of The Lord of the Rings is still not understood by some literature faculty, and likewise, some critics of the Star Wars prequel trilogy wonder why the films were so popular.

Stories and worlds are evaluated by different sets of criteria, and one cannot simply apply one set to the other. Sometimes a story is only there to provide a framework with which to experience a world, as is the case, I would argue, in a movie like Titanic or a book like The Planiverse (1984). While we generally do not fault good stories which don’t involve much world-building (since one has the Primary world to fall back upon, when the author does not provide invention), worlds, even elaborate ones, are often faulted if they do not contain good stories.

Of course, a world is always more enjoyable if the stories set in it are good ones, but some stories are clearly vehicles to convey a vicarious experience of a world and nothing more; once this is realized, one can set aside narrative expectation and focus on the world for what it is. General audiences seem to be able to do this rather well, especially in cinema, where the experience of visiting a world is made vivid through concrete imagery and sound, or in literary genres like fantasy and science fiction. Now that worlds and world-building are more prominent in culture, and particularly in popular culture, and with the rise of media like video games in which the experiencing of a world can be done with little or no narrative, critical criteria may begin to change to recognize the merits of well-built worlds.

Given what we’ve said above, what might be the criteria by which we would evaluate a text based on its world-building capacity?

 Well, as I discuss in chapter three, one way to examine the depth to which a world has been built is to examine the degree to which its infrastructures (such as maps, timelines, genealogies, nature, cultures, languages, mythologies, and philosophies) have been developed, and the interconnections between these infrastructures (which involves examining their degree of invention, completeness, and consistency). The more developed a world is, with these criteria in mind, the more we have something which appears viable, and the more we can extrapolate a world’s logic to fill in missing details, making the world seem more complete than it actually is (a well-designed world makes it so easy to fill in missing details that we may do so without even consciously noticing that we are doing so).

And it is not simply a question of the quantity of details, but their quality as well; their aesthetics must be appealing, in one way or another (this does not always mean something is beautiful) and the ideas embodied within a world must be engaging as well. Good stories will still always help the enjoyment of a world, since vicarious experience relies on character identification to some degree; one can marvel at a work of world-building but not feel one is within a world; and one could argue that such vicarious experience need not be a criterion of greatness.

But most audiences will still want such experiences. When one looks at the worlds that have endured over the years, or have found great popularity, one will find that many of these elements are usually present; and although some excellent and well-built worlds, like those of Defontenay’s Star (1854) or Wright’s Islandia (1942), remain obscure despite their greatness, they will perhaps gain the respect they deserve now that world-building is becoming more valued.

 As I am writing these questions, I just saw Pacific Rim. Here, the filmmaker sets himself a challenge in creating a totally new franchise in introducing the viewer into a fictional world and establishing its basic contours. In this case, of course, he is not creating in a vacuum, since he relies heavily on audience familiarity with conventions from Japanese popular media -- the Mecha and giant monster genres. Yet, it can still be challenging to make sense of what we are seeing on screen given the range of unfamiliar objects and creatures going at it at once. What might Pacific Rim teach us about the challenges of introducing new worlds to audiences?

While Pacific Rim does feature some futuristic cities and a single glimpse of a new planet (near the very end of the film), it is still mainly set in the Primary world (our world) and not all that far into the future, so the world-building that occurs is mainly on a more local level (like the Shatterdomes).  But as you say, there’s much in the way of new technologies and creatures appearing on-screen, combined with a rapid pace.  And I also found it interesting that the film was neither based on an existing franchise nor constructed as a star vehicle; it seems to have been made and marketed strictly on its own merits (although special effects were highlighted as an audience draw).  Also, as you mention, most of what we need to learn to follow the story relies on established science fiction conventions, along with the film’s two new terms naming its combatants (Kaijus and Jaegers), which are literally defined up front.

So while I liked the fact that the film starts a new franchise, it seems as though doing so made the filmmakers (most likely, the producers) cautious to do anything too far removed from established conventions. As a result, the film has relatively little innovation and there seems to be little reason to see it a second time; everything seems clearly explained, leaving few, if any, unsolved enigmas.

While concentrating on making the story clear is fine, there could have been more extensions of the world beyond what was required to tell the story; there is not a lot of background detail and action that would warrant additional screenings, and very little world outside the activities surrounding the Kaijus and Jaegers themselves.  As the Star Wars films have shown, though developing and creating such additional background world material may raise the cost of a film, the richness that it adds to a world will make audiences want to return again, inviting speculation and perhaps even generating enigmas that need not be solved for the narrative to be complete.  The world of Pacific Rim, on the other hand, does not seem to extend far beyond the needs of the narrative.



Raising Children in the Digital Age: An Interview with Lynn Schofield Clark (Part Three)

Your book is full of evocative phrases and concepts. One of my favorites is that of “emotional downsizing.” When and where does “emotional downsizing” occur and what does it tell us about the context in which contemporary parenting occurs?

I used the term "emotional downsizing" to talk about parental expectations regarding family life and how media fit into these expectations. This comes up in a specific example about a mother who talks about how she wishes that her family could do more activities together, but they don't due to the time pressures they face (the parents have demanding jobs and the teen and preteen children have school, activities, and for the younger child, time in child care rather than at home). The mother wished that they could engage in different kinds of activities together - like hiking or playing board games together - that would require them to be "unplugged." Yet sometimes, the pressures of everyday life meant that she needed to lower her expectations about what was realistic and possible. This is how "movie night," while not a preferred activity for this mom (and for many of the parents I interviewed), became nevertheless a positive instance of "family time." Doing something together, even if it's a less parentally approved activity, is still worthwhile and sometimes it's the best we can do in what can be an exhausting schedule of family life. Parents therefore lower their expectations of an "ideal" family activity, or engage in emotional downsizing, coming to see the up side of engaging in mediated activities together.

Incidentally I discovered after writing my book that I use this term in a way that differs from sociologist Arlie Hochschild's use of it, although I refer to her work on family life throughout my book (e.g., I use her term "emotion work" to talk about what parents go through when justifying the decisions they make in relation to emotions rather than rational decision-making). In her book The Time Bind, Hochschild uses the phrase "emotional downsizing" to refer to what happens when parents assume that their children need them less than they do, which is followed by "emotional outsourcing," or leaving children in the supervision of hired caregivers. I observed both of these, but I wanted to highlight how television, movies, YouTube sharing and other mediated leisure activities - often discussed as less desirable than other activities - come to be part of something that family members view positively as "family time."

At a time when many of us are writing about the values of “connected learning,” your book offers a “reality check.” What kinds of obstacles or challenges do you see in trying to create richer educational opportunities for youth through the informal learning sector or for connecting what takes place in the home with school-based learning?

That is a great question. U.S. families across the economic spectrum are so busy these days, whether that's due to work and activities in the best of situations, or due to the chronic health issues, doctor's visits, and inconvenient transportation and work schedules that tend to be part of the most challenging family experiences. I love the ideas involved in connected learning: the interest-powered, peer-supported, and academically oriented learning principles and the production-centered, openly networked, and shared purpose design principles. But I do see two key issues.

First, both parents and young people need to see how connected learning is in the interests of the young people themselves. This is obviously the point of developing case studies that demonstrate the effectiveness of learning in places like Quest2Learn and the Digital Youth Network. These will demonstrate that connected learning helps young people develop skills and literacies they will need to survive in education and beyond.

But secondly, both parents and young people need to see how connected learning is consistent with their goals as a family. How can programs of connected learning give parents opportunities to share their values and life experiences with their children? How can programs of connected learning help young people to feel that their experiences and perspectives are valued by their parents? Of course, connected learning isn't a "program" so much as an approach, but parents may need to see specific programs in order to recognize how it is that their child's school wants them to engage and will value their life experiences and familial goals in the process. I think that embracing a family-centric approach will move "connected learning" out of the headspace of "homework" or "youth after school activities" and into the space that I think the connected learning innovators want to go, which involves strengthening bridges between home and school life.

While the book is primarily descriptive of a range of different models of parenting in the digital age, you end with some normative advice about the ways parents might improve upon the quality of experiences they have with digital and mobile technologies. What philosophical commitments govern this advice for you?

I wanted to avoid giving very specific advice about hours spent in front of screens or with mobile devices. Instead, going back to your first question, I wanted parents to be able to think about the "parent app" that best fit their own situation and needs. For me, I think my primary philosophical commitments are to the inherent worth and dignity of every person and to the interconnectedness of all people and living beings of nature. I believe that we each need relationships of trust, mutuality, and compassion to survive, and we each have responsibilities to act in ways that foster those relationships. Maybe this is especially so in our primary relationships with our families. So I wanted to end the book with some suggestions rooted in the idea that all of us share a desire for meaningful relationships of mutuality and respect. I have a longer list in the conclusion, so here's the edited version:

1. Be clear and fair about expectations regarding digital and mobile media, but be willing to change as children grow older and their needs change

2. Model the behavior you want, which includes prioritizing time together

3. Let children take the lead in teaching you about their media lives

But I also didn't want to lose sight of the fact that for a lot of people, our experiences are related to and limited by not just what we can choose to do, but our cultural and social environment. So, I wanted to propose that collectively parents can work with others to shape an environment that better meets our desires for trust, mutuality, and compassion.

Thus, in relation to the bigger picture:

1. Change the situation for young people

2. Change the media to change the culture

As I write at the end of the book, the digital and mobile media that are so much a part of our lives may seem inevitable, but the particular forms they take and the organizational patterns governing the industries that make and distribute them are not. It is up to us to choose how these media will fit into our collective lives and how they will shape the lives of our children and families in the future.


Lynn Schofield Clark is Associate Professor, Director of the Estlow International Center for Journalism and New Media, and Interim Chair of the Media, Film, and Journalism Studies department at the University of Denver.  In addition to co-parenting two teens, she is author of The Parent App: Understanding Families in a Digital Age (Oxford U Press, 2012), From Angels to Aliens: Teenagers, the Media, and the Supernatural (Oxford U Press, 2005), and co-author with Stewart Hoover, Diane Alters, Joe Champ, and Lee Hood of Media, Home, and Family (Routledge, 2004).  She teaches qualitative research methods and journalism courses, and is currently involved in a community-engaged youth participatory action study of news and story-sharing among high school aged recent immigrants to the U.S.

Raising Children in the Digital Age: An Interview with Lynn Schofield Clark (Part Two)

Another core theme running through the book has to do with different experiences and expectations about media depending on the economic class background of parents. How would you characterize those differences?

I describe two different ethics that guide family approaches to digital and mobile media: an ethic of respectful connectedness, and an ethic of expressive empowerment. I'm really building on a lot of work in sociology of the family in this area (see, e.g., Annette Lareau and Allison Pugh as well as Roger Silverstone, each of whom looks at how family economics shape everyday experiences). The term "ethic" is meant to signal that there are guiding principles that help parents and young people determine a course of action in relation to communication practices. I use the phrase "Ethic of expressive empowerment" to refer to those families that seek to use the media for education and self-development, and the phrase "Ethic of respectful connectedness" to refer to those families that want to use media in ways that honor parents and reinforce family and cultural ties.

The differences are most stark at the extremes. The ethic of expressive empowerment can lead parents to think of their children as in need of constant guidance and oversight. When parents assume that they need to ensure the most empowering activities and the most appropriate forms of expression for their children at all times, they can rather easily slip into using technologies for covert helicopter parenting.

On the other hand, parents who are very concerned about the ways that technology use might undermine respect for parents can be drawn to a sort of "tough love" approach, using their children's social networking accounts to engage in publicly humiliating their children as a means of demanding respect, or being quite restrictive and "strict" about technology use.

Most parents fall between these two extremes, but each approach seems in some ways related to class-based ways of thinking about risk and technology. Upper income families in my study worried that their child might miss some opportunity that would secure their ability to compete in the increasingly merciless economic environment, and this drives the desire to oversee appropriate uses of time spent with technologies (and hence also supports covert helicopter parenting). Lower income families worry about their children's futures as well, but because many in my study had experienced the failures of society's institutions, they place more trust in close relations - which is why undermining respect for one's closest family members can be so threatening (and why engaging in a "tough love" response of public humiliation or strong restrictions on technology seems appropriate).

I wanted to outline these different approaches not so much to tie one or another specifically to class, but to highlight the idea that not all families have the same concerns about how technologies are playing a role in the lives of their young people. I think that many of us in education tend to embrace an ethic of expressive empowerment and so we see the positive potential in technologies. But I wanted to offer some clues as to how counselors, educators, and parent advocates might discuss technology and its risks in family life in relation to differing ethics that frame a family's course of action.

You try to challenge and complicate prevailing myths about cyber-bullying. What advice do you have for parents who are concerned that their children may be being bullied?

First of all, parents need to resist the urge to jump in and "save" the child. Ultimately, our goal as parents is to raise children who have resilience. We parents need to see ourselves as resources who can help our children solve their own problems. We do this when we talk with them about different strategies of response and tell our own stories of how we respond when we feel bullied or harassed.

Of course, some incidents escalate beyond what a young person might be able to address on his or her own.

I've been doing another study specifically on cyberbullying among teens, and one of the things I've found is that teens don't like the term "cyberbullying." "That's what happens to younger kids," as several high school students told me. They prefer the term cyberharassment, which suggests the seriousness of the issue.

And so I also really like Common Sense Media's approach to cyberbullying and in my book I echo what they suggest. It's important for parents to encourage their children to stand up, not just stand by when they witness such harassment, and it's equally important for those who are victimized to seek sources of support so that they are standing with others in response to the perpetrator.

You acknowledge throughout the book that some of your findings push against your own values as a parent. What would be some examples where you were forced to question your own assumptions about good parenting?

Even though I think of myself as someone who loves to spend time with my children, writing this book made me realize that this often comes into conflict with my sense that part of being a good parent is balancing work and home life appropriately. When it comes to children, there's really no balancing or multitasking, there's just the attention you can focus on one thing or another at any given time. In other words, if I really want to spend time with my children, I've got to put away my laptop and phone. And I've also decided to be much more intentional about spending time doing media-related things with them. Fortunately, we all like the Just Dance 2 DVD we received from a grandparent over the holidays!

In your discussion of teens’ online play with identity, you introduce the concept of “interpretive reproduction.” Can you explain this concept and discuss what it helps us to see about teens’ strategies for using social media?

Sociologist William Corsaro introduces the term "interpretive reproduction" as a way of challenging our tendency as adults to think about children in terms of "socialization," or in terms of what they will become in the future rather than in relation to what they are doing presently. The term "interpretive reproduction" describes the process that young people go through as they interpret and then innovate as participants in society. They're not just internalizing and absorbing culture; they're actively contributing to how it is changing, even as they're doing so in relation to existing social processes. I used this term as I was trying to sort out what was "new" about the context of digital and mobile media in teen identity work, and what was pretty consistent with the way teens had been engaging in identity work for a long time.

I think the term helps to remind parents that parenting is a process that involves not only parental intentions but also the creativities of young people as they respond to their environments. As parents it's easy to feel nervous about the fact that we can't control a lot of what happens in new media environments. I think it's helpful for parents to look for patterns that relate to what came before, so that we can see that young people are using these new media to address needs that have remained remarkably similar from their generation to ours. At the same time, for sociologists interested in the role of media in social change, it's important to see that the innovations of young people do matter. They are contributors to culture, which is why it's important to look at their practices not just in relation to parental intentions but also in relation to how the collective uses of technologies among all generations are changing our social lives.

Lynn Schofield Clark is Associate Professor, Director of the Estlow International Center for Journalism and New Media, and Interim Chair of the Media, Film, and Journalism Studies department at the University of Denver. In addition to co-parenting two teens, she is author of The Parent App: Understanding Families in a Digital Age (Oxford U Press, 2012), From Angels to Aliens: Teenagers, the Media, and the Supernatural (Oxford U Press, 2005), and co-author, with Stewart Hoover, Diane Alters, Joe Champ, and Lee Hood, of Media, Home, and Family (Routledge, 2004). She teaches qualitative research methods and journalism courses, and is currently involved in a community-engaged youth participatory action study of news and story-sharing among high-school-aged recent immigrants to the U.S.

Raising Children in the Digital Age: An Interview with Lynn Schofield Clark (Part One)

A few posts back, I shared with you my interview with art historian Amy F. Ogata, author of Designing the Creative Child: Playthings and Places in Midcentury America. Ogata was nice enough to discuss with me her thoughts on the ways contemporary ideas about the digital child might have been informed by the thinking of the postwar era. Today, I want to push us to think even further about the nature of childhood and parenting in the digital age. My interviewee is Lynn Schofield Clark, author of the 2013 book, The Parent App: Understanding Families in the Digital Age. The Parent App builds upon a rich tradition of work on the intersection of media and the family, going back to early work in this space by writers such as James Lull, Roger Silverstone, and Ellen Seiter, as well as more recent work by scholars such as Sonia Livingstone in the UK or the Digital Youth Project in the United States. Clark is clearly familiar with this literature, but she also pushes well beyond it -- not simply because of her central focus on digital and mobile technologies, but also because she is so attentive to the shifting conditions -- economic, social, technological -- which impact the lives of American families today. There is an admirable balance here between the broad view -- an account of significant shifts in the relations between work and family -- and a more focused attention to the specific narratives of the individual families she describes.

She has a particularly nuanced concern for notions of class, as they operate on much more ambiguous terms in American culture than in the British tradition that informs her work. She helps us to understand how the choices which parents make about their children's access and use of new media technologies are strongly shaped by class -- in the literal sense, in terms of access to technologies, time, space, and cultural capital, and in the more figurative sense, in terms of very different ideologies of parenting that determine what value families attach to different kinds of activities within and beyond the home.

She is a gifted ethnographic storyteller: each segment offers a vivid portrait of the people involved, the choices they are making, the impact of those choices on their lives, and the contexts within which these choices get made. She does an admirable job here at moving between descriptive and normative agendas, being clear about her own stakes as a mother in researching and understanding how decisions get made about media in the context of family lives. She makes it clear that some of the choices parents make clash with her own norms and expectations as a mother, but she looks at each of her subjects with sufficient sympathy and empathy that she can explain why these choices make sense to them, and she also observes that stricter regulation does not always result in estrangement between parents and children.

All told, this is important work, especially at a time when a growing number of scholars in the Digital Media and Learning field are seeking to understand the learning ecology -- the ways that informal and participatory learning opportunities outside of school may become part of a "connected learning" system that supports children's educational growth. She clearly understands the stakes behind this work, but she also brings a healthy dose of realism to the conversation, noting that even middle class parents who may buy into the ideology of participatory learning often do not devote much time to enhancing or contributing to these kinds of opportunities for their offspring. She also offers us some insights into why lower income families suffer from diminished opportunities -- not simply because of constraints on resources, but also due to hostility from others in their immediate environment towards certain goals or norms they might associate with social striving and upward mobility. Clark finds that even professional, college-educated, upper middle class parents often lack the skills and knowledge to meaningfully mentor their sons and daughters about their online lives; she finds that even in close families youth often involve themselves in activities behind their parents' backs, circumventing rules designed to protect them from exposure to risks. She suggests that parents still look upon their relationship to new media primarily in terms of regulating exposure, limiting time, and managing risks, much more than creating and sustaining opportunities.

What do you mean here by “parent app?” How does the title speak to parents' expectations about the ways that digital and mobile media devices are impacting their relationship with their children?

I used the phrase "the parent app" in a tongue-in-cheek way, as in, "wouldn't it be wonderful if there were an app that could provide parents with an answer to every possible dilemma that emerges in relation to parenting and technology?!"

The title also plays with the film title, "The Parent Trap," in that I found that parents do often feel trapped, or at least overwhelmed, by the fact that they think that their children are growing up in a digital culture that they may not fully understand and to which they think they have limited access. This parental anxiety drives us as parents to want some neat-and-tidy way to address technologies in family life. So, I used the title to signal that mine *wasn't* going to be a straightforward "advice" book, because I really believe that every parenting situation is unique and therefore I think it would be impossible to create such a book, let alone an app, that would address what is a constantly changing situation.

What I wanted to create was a book that was more like the kinds of conversations I participate in with parents and, less officially, with research friends, when we share stories and try to make sense of what they mean for our unique situations and dilemmas. So, the book itself is very story-driven in terms of its approach. My hope is that the stories help parents consider their own situations and then build their own "apps."

Throughout the book, you are attentive not only to what teens and adults say about their relations to and through these media, but also the contexts in which your interviews were conducted. In what ways did both teens and parents use the interview process to deliver messages to other family members?

We all live in such busy times that in U.S. families, it's pretty easy to focus on the immediacies in our conversations with one another. The interviews for this book gave parents and young people a chance to sit together and discuss something important, and that in itself often made for a positive experience. The interview experience allowed parents to reinforce the message of how important it is to value the time we can spend listening to one another. Of course, this means that the parents who feel "too busy" to talk with their children didn't participate in the interviews, and I believe that this skewed the sample somewhat. But it also gave the study a chance to explore what happens when those families that do prioritize being together actually focus attention on the sometimes-contentious issues that arise in relation to digital and mobile media.

Risk is a central theme running through the book. How do parents and youth understand the “risks” of networked communications in different ways? Why are we as a culture so often preoccupied by these risks and so often uninterested in the potential value of teens' online lives?

In the U.S. we live in a culture of fear, as sociologist Barry Glassner has argued. In my book I discuss the role that the news media have played in relation to appealing to this fear, which in turn contributes to our sense of risk. TV news in particular highlights unusual yet poignant occurrences that its viewers will find troubling -- it has to do this because it needs to appeal to the lucrative audience of young parents in the 25-40 age category in order to stay on the air. So stories about children and Internet-related concerns, while important, receive attention that tends to magnify the sense of risk in a manner that's disproportionate to the actual risk.

I found that even though parents and teens voiced many of the same fears about potential risks that you see in the news, young people in their teens and preteens tended to recognize and know how to avoid the most-publicized risks, such as predators and encounters with strangers. The preteens and teens in my study were concerned about risks that they related to identity: what you might call dissing, drama, and disregard (or being ignored). This is consistent with a lot of research that's been done by the Pew Internet project and by Microsoft's danah boyd (who spoke of "drama" as a word teens prefer to describe what adults might call cyberbullying).

I think you're right, Henry, that many parents are preoccupied with potential risks and less interested than they might be in the value of their teens' online experiences. Parents tend to see safety as their first order of business, so I guess that orientation isn't surprising. Yet as digital and mobile media become more integrated into family life, parents are coming to see the benefits of such media, particularly in relation to parental goals of enhancing family connection in a time that's characterized by our sense that we're busier than ever.

Lynn Schofield Clark is Associate Professor, Director of the Estlow International Center for Journalism and New Media, and Interim Chair of the Media, Film, and Journalism Studies department at the University of Denver. In addition to co-parenting two teens, she is author of The Parent App: Understanding Families in a Digital Age (Oxford U Press, 2012), From Angels to Aliens: Teenagers, the Media, and the Supernatural (Oxford U Press, 2005), and co-author, with Stewart Hoover, Diane Alters, Joe Champ, and Lee Hood, of Media, Home, and Family (Routledge, 2004). She teaches qualitative research methods and journalism courses, and is currently involved in a community-engaged youth participatory action study of news and story-sharing among high-school-aged recent immigrants to the U.S.

The "Creative Child" Meets The "Digital Native": An Interview with Amy F. Ogata (Part Two)

You write extensively in the book about the design of playrooms, suggesting that there is a shift in terms of children’s access to physical space within the home during this period. What factors led to the shift and what were the prevailing ideas about the design of play spaces for children?

Yes, I spent a lot of time thinking not only about playrooms and playhouses of the domestic sphere, but also public schools and museums. In the single-family dwelling, the shift I am trying to trace is the growing belief that children, whose numbers exploded in the U.S. after World War II, needed their own spaces, and that these were not just utilitarian leftover spaces but spaces specially designed to promote their imaginations. In architect-designed houses, there were often playrooms on the plans. Even in builder houses, there were special places indicated for children's activities. One of the main ideas was that children should have "correctly" outfitted spaces. The American Toy Institute commissioned a series of model playrooms to house numerous toys and make playing indoors attractive. Others, such as the anthropologist Margaret Mead, argued that children should be left alone in their bedrooms to think and develop their own ideas. Isolation is one of the themes, but proximity to the rest of the family, especially the mother, is also written into some of these houses. And the making of a "creative" home environment was stated in magazines and guidebooks as an expectation of postwar parents.

As you note, there was a dramatic increase in the number of children’s museums across this period, as well as a changing philosophy about what forms of creative engagement such museums should support. What has been the lasting impact of these ideas on current museum practices?

The form children's museums take today is, in part, a result of the enduring notion that the sensory encounter of objects will enhance learning and stimulate new thoughts. Children's museums as a type were not new, but they did increase very quickly during the Baby Boom. And while early museums emphasized nature study, their postwar versions were more likely to ask the child to experience something, whether it was being under a city street or climbing through a giant molecule. In the case of the Exploratorium, which was never specifically a children's museum but engaged lots of children, visitors were encouraged to experiment with perception. Several museums I discuss look very different today -- the Exploratorium, for example, has just moved to a new facility. They now attract a much younger child than museums did in the 60s and 70s, and many of the exhibits are less open-ended, or they go straight for entertainment, emphasizing dramatic play over, say, studying waves in a ripple tank. I think the most long-lasting aspect is the general belief that children should be active in the museum space.

It seems to me that some contemporary efforts to develop alternative kinds of spaces for children and youth still owe a great deal to the design approaches of this era. I was hoping I might get you to comment on what someone from the 1960s would recognize or find strange about two contemporary educational spaces for children. The first is the YouMedia Center at the Chicago Public Library.

Sounds like a great space, and in some ways it resembles the kinds of open school ideas of the late 60s and 70s. In that age, the push for large open spaces and team teaching was promoted as an answer to a teacher shortage, a way to enable use of "teaching machines" and media (in that day it was film, television, and sound recording), and a way of engaging children in hands-on projects, like producing TV shows for their schools. Architects thought that the spaces they created would ensure that teachers and students behaved in certain ways -- smaller classrooms would encourage small group instruction, larger spaces might promote collaborative projects, moveable furniture would lead to flexible spaces -- however, that didn't necessarily happen. YouMedia is obviously not a space where core subjects are taught on a daily basis, but instead is an auxiliary space for exploration after school, perhaps more like the Exploratorium or the Brooklyn Children's Museum as it was a long time ago. There, children and teens could operate machines, mix soils in a greenhouse, graffiti a concrete wall, or retreat to read in a library housed in a leftover gas tank.

The second is the Los Feliz Charter School for the Arts. Again, what commonalities and differences do you see between the ideal creative spaces of the 1960s and this school?

This is another great example of the ways that progressive educational ideas are resurgent; however, this is a charter school with access to the kind of private funding that is not available to regular public schools that depend on tax revenue. The schools I discuss were all publicly funded (some were in extremely wealthy neighborhoods and others in poor rural areas) and aimed to accomplish some (but certainly not all) of these same learning objectives. Many of them were small and have been changed over the years. It seems that the Los Feliz school has tried to use space to encourage curricular outcomes. Like some schools in the postwar era, they have given over far more teaching space to projects like art, music, and drama. Increasingly these are the subjects that are getting squeezed out of the public school day by constant budget cuts, emphasis on standardized testing, and in places like New York City, by demands on limited space. The sentiment that one teacher in this video conveys -- that they are not trying to turn out artists but rather confident, well-balanced people -- echoes exactly the discourse on creativity in the postwar years. The notion that creativity is a lifelong benefit that will eventually help children become competitive in the workplace has also found its way to college campuses. I don't mean to sound skeptical of creativity itself (I am an art historian!), but I think that the schemes we adopt to instrumentalize it reveal that we lionize creativity as a cultural myth at moments when we feel insecure.

Amy F. Ogata is associate professor at the Bard Graduate Center: Decorative Arts, Design History, Material Culture in New York City. She is the author of Art Nouveau and the Social Vision of Living. Her new book, Designing the Creative Child: Playthings and Places in Midcentury America was recently published by the University of Minnesota Press.

The "Creative Child" Meets the "Digital Native": An Interview with Amy Ogata (Part One)


The Post-War American family turns out to have been a much more complex phenomenon than our stereotypical images of Leave It To Beaver might suggest. The Baby Boom generation, invested in critiquing the values of their parents, left us with an image of the era which is highly conservative, ideologically repressive, emotionally sterile, and materialistic -- there's some truth to these cliches, of course, but there was much more going on. In particular, there was an attempt, coming out of the Second World War, to embrace a conscious project of designing and developing a new generation which would be free of the prejudices of the old, which would be capable of confronting global problems and making intelligent decisions about the Bomb, which would be democratic to its core and thus resistant to future Hitlers, and above all, which would be free of inhibitions which might block their most creative and expressive instincts.

I've long been fascinated by this period, but rarely have I seen it written about with the depth and insight that Amy F. Ogata brings to her new book, Designing the Creative Child: Playthings and Places in Midcentury America. Ogata brings a design/art history perspective to bear on the period, telling us more about the ways that ideas about children as expressive beings helped to inform the design of toys, playspaces, schools, libraries, museums, and other public institutions, and beyond that, she offers some glimpses into how these ideas about creativity helped to shape children's books, television, and other popular culture texts. I came to the book for the insights that it might give us into the children's media of the 1950s and 1960s, but I left with a much more immediate sense of how ideas about childhood during that period might speak to our present concerns. As I wrote as a blurb for the book:

At a time when the news media is again concerned about a crisis in American creativity, schools are cutting funding for arts education, major foundations are modeling ways that students and teachers might 'play' with new media, and museums worry about declining youth attendance, Designing the Creative Child makes an important intervention, reminding us that these debates build on a much longer history of efforts to support and enhance the creative development of American youth. I admire this fascinating, multidisciplinary account, which couples close attention to the design of everyday cultural materials with an awareness of the debates in educational theory, public policy, children's literature, and abstract art that informed them.

So, the following interview is designed to explore those points of intersection between the "creative child" as imagined in the post-war period and the "digital native" as conceived in the early 21st century. As a careful historian, Ogata makes some nuanced distinctions between the two, yet she was open to exploring the ways that these older concepts about childhood might still be informing some of our current discussions about digital media and learning.

You open the book with a quote from Arnold Gesell who writes that “by nature” the child was “a creative artist of sorts....We may well be amazed at his resourcefulness, his extraordinary capacity for original activity, inventions and discovery.” This formulation reminds me of contemporary formulations of children as “digital natives” who "naturally" know how to navigate the online world. What do you see as some cornerstones of this belief in the “creative” child? Is the goal for adults to facilitate and support this creativity or to get out of the way and avoid stifling it?

This is an interesting analogy and one I had not considered. Gesell is articulating a sense of surprise and admiration, and it resembles how we speak about children navigating digital devices. What the concepts of the "creative child" and the "digital native" share is an essentialist belief that children are somehow "naturally" inclined toward certain expressions or activities, and it is very hard to support these kinds of overwhelming generalities. Moreover, while we might praise the "naive" and untutored, behind these sentiments I also detect both a patronizing quality and a sense of loss or regret on the part of the adult. The idea of the creative child is one invented by adults and, as I argue, it serves many different interests, from toy manufacturers to art museums, Cold War ideologues to serious scientists.

The cornerstone of the idea of the creative child is that he or she possesses "natural" insight that comes out in play. Another related belief is that childhood creativity is a fleeting quality that has the potential to provide future gains for the child, her parents, and the nation. Because the idea of nurturing creativity in children was so widespread (and such a big business) after World War II, we tend to understand children's creativity in limited, usually positive terms and we expect it to take certain forms. This, perhaps, is where the creative child and digital native part ways, given the lingering popular suspicion around children and the digital environment (the belief that kids might get themselves or others in trouble). In the historical case I outline, it is a parent's responsibility to facilitate a child's creativity by providing toys, amusements, and spaces for play. But the public was also invested in some of these notions, evident in new public schools, spaces for exploration such as museums, and in art education programs.

What connection existed between the ideal of the creative, expressive child and the growing consumer culture of the post-war period? What kinds of products were able to attach themselves to this particular construction of childhood?

The consumer dimension was a powerful one and has become even more so today. It's hard to escape the rhetoric of creativity if you're shopping for toys or games, or other things like clothing and schools. The child's block, the cardboard box, and crayons were some of the most romanticized and widely prescribed amusements of the postwar age. In addition there were some objects, created by architects and designers, which were deliberately arty and were sold specifically as creativity toys.


Magnet Master was a magnetic building toy designed by Arthur Carrara and developed as a product of the Walker Art Center. There were no instructions or diagrams because, the museum reasoned, children didn't need them and would do better on their own. The Philadelphia architect Anne Tyng developed a building toy she attempted to market under the idea of stimulating children to build and explore. Charles and Ray Eames's 1950s paper toys were similar but used different materials and were more widely available and for a longer time. But other products, once so ubiquitous, have now completely disappeared. The simple indoor fabric playhouse that draped over a card table is gone, in part because people no longer have those standard-sized card tables.

To what degree was the ideal of the creative child bound up with particular experiences of class, race, and gender? That is, was the expressive child more likely to be middle class, white, and male, or did these writers offer a more multicultural understanding of what constituted creativity?

The figure of the creative child in this historical era is extremely middle class, but not exclusively male and not exclusively white. In the early 1950s, white children are implied in the toy ads and housing schemes; by the early 60s, this is still dominant but less so. Creative Playthings placed ads in Ebony, for example, and the Brooklyn Children's Museum's 1970 renovation was very much designed with the local Crown Heights neighborhood in mind. The creative child is a construction that aims to overlook difference while simultaneously selling exclusivity. This is one of the paradoxes of the idea. Creativity is described as something that all children are supposed to possess "naturally," but at the same time parents and teachers are told that it needs careful tending and stimulation, usually through specific kinds of toys and materials.

What role did television play in promoting and supporting this concept of childhood creativity?

Television was of course a central force for the representation of childhood in postwar America and had a role to play in helping to create the specific figure of the creative child. I spend most of my book describing material and spatial forms that do this work, but there are several programs that also had an important role in the making of the idea. Winky Dink, which asked the child to "finish" the story by drawing on a special screen affixed to the TV itself, is an obvious example for harnessing the child's agency, but the character who, I think, best represents the image of the postwar creative child is Gumby.

Gumby's energy and imagination are represented in the many physical forms he takes, and the way he and his sidekick Pokey move in and out of stories, eras, and places. His exuberant inquisitiveness sometimes brings havoc upon himself and his family, but this is of course resolved before the end of the program. The way creativity is constructed on television and in children's books emphasizes the positive and tends toward happy endings.

Often, across the book, it seems that children’s imaginations are linked to various forms of abstraction. What was the relationship between childhood and the modern art world during this period?

You are right about this. Abstraction is one of the recurring motifs of the designed objects and spaces I discuss. Frank Caplan, who was one of the founders of Creative Playthings, believed that undefined shapes and unpainted forms would help to stimulate a child's imagination. The company sought out artists to design toys and playgrounds to enhance their business and for cognitive developmental reasons, but also because they were genuinely interested in the links between modern art and design and objects for children; they collaborated several times with the Museum of Modern Art. This occurred at a time when abstract painting and sculpture were gaining prestige in both the U.S. and Europe, and had a propagandistic role in the Cold War. However, the twinning of abstraction and a child's imagination (evident in forms like children's drawings) is an older idea. Early twentieth-century European modernists deeply admired the representational strategies of children's art. This notion comes back with new vigor in the "Creative Art" education curriculum that asked pupils to express their experiences rather than copy models. There was, then, a demand placed on children to be creative, and often abstract.

 Amy F. Ogata is associate professor at the Bard Graduate Center: Decorative Arts, Design History, Material Culture in New York City. She is the author of Art Nouveau and the Social Vision of Living. Her new book, Designing the Creative Child: Playthings and Places in Midcentury America was recently published by the University of Minnesota Press.

Bastard Culture!: An Interview with Mirko Tobias Schäfer (Part Two)

Your more recent work on Twitter has deployed the concept of a gift economy, building off some of the ideas in our original white paper on Spreadable Media. How are you defining gift economy? Why is this appropriate for talking about digital media? How do contemporary forms of gift economies in the context of capitalism differ from more classical understandings of this concept?

It is less the 'economy' in gift economy than the 'gift' that interests us: the gift as a form of 'public recognition'. And this initial public recognition, offered with the intention of more exchanges in the future, is a key aspect of gift economies, as Bronisław Malinowski has pointed out. Together with my colleagues Johannes Paßmann and Thomas Boeschoten, I looked into Twitter data to retrieve patterns of communication (The Gift of the Gab: Retweet Cartels and Gift Economies on Twitter). When investigating two samples, the MPs of the Dutch parliament and the German top Twitter accounts, we noticed clusters of users who were retweeting each other frequently, so-called retweet cartels, similar to citation cartels in academia. We argue that the retweet equals a 'public recognition' and can serve as an 'opening gift' offered with the intention of receiving retweets in return.

What does the notion of the gift economy help us to see when we look at patterns in how content travels through Twitter?

We explicitly refer to your recent work on spreadable media, where you employ the notion of the gift economy to explain spreadability. We agree with you that this concept provides more plausible explanations for the distribution of online content than the notion of 'viral distribution'. The retweet, the repin, the favourite are intrinsically related to attention. However, they are 'cheap' gifts, as they are abundant. But such a gift can gain more value through the status of the user: a popular account retweeting a less popular one draws attention to it. It is therefore unsurprising that we find politicians mostly retweeting their own party members. Members of the Favstar scene frequently retweet accounts that are equally popular. They form a retweet cartel, very similar to academic citation cartels. However, when we look at the @replies within our sample of Dutch MPs, we can see that they do not limit their communication to their own party members but exchange replies with colleagues from all parties. We therefore conclude that where attention is drawn to messages through retweeting, users become selective in whom to award the 'gift' of a retweet.

I do not know how Paßmann and Boeschoten feel about it, but I would not necessarily stick to the strict economic understanding of the 'gift economy'. I think it will prove even more useful to adapt the term. It is most likely a feature for stimulating communication and connection. By communication, I mean ephemeral communication, not conversations. The 'gift' is important for fuelling initial contact-making. Features such as the retweet, the favourite, the repin, the +1, etc. are the grease of initial social interaction on large platforms. They facilitate low-threshold exchanges; 'communication' is the wrong word, and even 'contact' does not quite cover it. It is something between a mere ping, recognition, and contact. But it is crucial for enabling the interaction of users and the spreadability of content in social media.

Your research is interesting for the ways that it combines large-scale/quantitative “sentiment analysis” tools with more qualitative use of cultural theory. Does this reflect different skill sets within the team of researchers? Are there any insights you’d like to share about mixed-methods research growing out of this project?

I'm teaching in a media studies department within the Utrecht University humanities faculty, where qualitative research methods are usually paramount. But researching new media, where any user activity produces data that can be analyzed, encourages us to employ those data for research. These digital methods, as Richard Rogers has dubbed them, are an invaluable expansion of our tool set. In the meantime many applications are available and many more are underway. Commercial platforms provide tools, but the two main pioneering groups in this area, Manovich's Cultural Analytics and Rogers's Digital Methods Initiative, also provide handy tools on their websites. For our Utrecht Data School we teamed up with Buzzcapture as a technological partner that actively supports our research with tools for data aggregation and social media data analysis. We conduct research concerning specific questions for our partners from public administrations, NGOs, and corporations. However, we take the liberty of asking different questions than the ones our partners posed, or of approaching things from a different angle.

I can see that student teams quickly develop a sort of division of labor, where scraping data, working in spreadsheets, and visualizing data and networks are carried out by different members of a team. We try to prevent this as far as possible, because we want all students to be involved in the entire research process: from scraping the data, cleaning it up and preparing it for analysis and visualization, to interpretation and contextualization. However, this is not easy, as there are indeed many specific tasks that require specialized knowledge and skills.

This work is inherently interdisciplinary. Software developers, computer scientists, data scientists, statisticians, and also data journalists are great to team up with for different research projects. We frequently invite colleagues from very different areas to participate in the Utrecht Data School, either to contribute directly to a project or to teach students.

To the humanities researcher this development is exciting for two reasons: data analysis and visualization produce new insights into the online phenomena we are investigating, and through using these tools and methods we also learn about their role in epistemic processes. Our knowledge society increasingly thrives on computed results and automated information processing. Computer-generated infographics appear highly persuasive. It is therefore important to develop a literacy that allows us to use the tools while also being informed about their limitations and their persuasive effect. In view of your concerns about techno-determinism, which I share, I want to emphasize that we deliberately want our students to develop a critical understanding of the role of information technology in our epistemic processes.

We also want them to experience how unstable, experimental, and exploratory our research activities are. Although we think the results are often compelling, we want to keep up a healthy skepticism and remain open to doing things differently. We are also aware that we are in a data-rich environment, but that research can unfortunately appear analysis-poor. And I think it is necessary for the emerging 'digital humanities' to make this skepticism an inherent part of their use of information technology.

Mirko Tobias Schäfer is assistant professor of new media and digital culture at the University of Utrecht (Netherlands) and research fellow at Vienna University of Applied Arts. He blogs at www.mtschaefer.net.

Bastard Culture!: An Interview with Mirko Tobias Schäfer (Part One)

It says something about the compartmentalization of academic culture that I only belatedly discovered Mirko Tobias Schäfer's Bastard Culture!: How User Participation Transforms Cultural Production (published by Amsterdam University Press in 2011) -- a work which poses some important critiques of the concept of participatory culture, especially as it relates to recent developments around Web 2.0 and social media. Schäfer, based in the Netherlands, represents an important tradition of critical theory about new media which has emerged most emphatically from Europe and which should be better known among those of us working within the United States. As we discuss here, he is especially interested in the ways that technological designs constrain or limit our participation, rendering it less meaningful, commodifying it, in ways that run directly counter to the explicit rhetoric about expanding participation and empowering users. Read closely, Schäfer's work still embraces the value of democratic participation, yet he wants to hold companies, and scholars, to a high standard in terms of what constitutes meaningful forms of participation, and he is eager to push us beyond the first wave of enthusiastic response to these new affordances in order to look more closely and critically at how they are actually used. As my interview here suggests, there are points of disagreement between us, but there is also much common ground to be explored, and there is an urgent need for researchers from different critical and disciplinary perspectives to be working together to refine our understanding of the current media landscape. I had the pleasure of sitting down with Mirko at the recent Media in Transition conference at MIT and look forward to many future exchanges.

Having last week featured an interview with the editors of The Participatory Culture Handbook, I want to continue this focus on new theories of  participation by sharing this recent exchange I had with Schäfer.  I have come away with an even deeper respect and admiration for Schäfer's nuanced critique of digital participation. The first installments of this interview involve looking backward to his Bastard Culture book, exploring the convergences and divergences in our thinking, and reflecting on how the debates around digital media have shifted since 2011. The closing segment shares more recent work Schäfer and his colleagues at Utrecht University have been doing using "big data" processes (in combination with more qualitative approaches) to better understand the kinds of social relations that are taking shape on Twitter.

The title of your book, “Bastard Culture,” is meant to suggest the ways that the worlds of users and producers, consumers and corporations, are “intertwined” or “blended” in the era of Web 2.0. I suspect we would agree that understanding the relations between these terms remains a central challenge in contemporary cultural theory. The goal is, as you suggest, to “provide an analysis that is not blurred by either utopian or cultural pessimistic assumptions.” Are we any closer to developing such an analysis today than we were when you first published Bastard Culture? If so, which contemporary accounts do you think help us to achieve this more balanced perspective?

It was indeed my goal to point out the general heterogeneity of online culture as well as to deconstruct the overly enthusiastic connotation of participation. Especially in academic discourse, the unconditional enthusiasm for so-called social media has cooled down by now. We can see important contributions criticizing social media platforms for their lack of cultural freedom (e.g. strict content monitoring), their breaches of privacy, and their commercial use of user activities and user data.

I like to distinguish three general approaches within this critique, which focus respectively on a) free labour, b) privacy issues, and c) the public sphere quality of social media.

Drawing on Marxist theory, these authors - among others Trebor Scholz, Mark Andrejevic, Christian Fuchs, and in part Geert Lovink - criticize social media platforms for generating an unacknowledged surplus value from user activities and for effectively determining the scope of user activities in order to maximize commercial results. Scholz's programmatic publication The Internet as Playground and Factory is a strong example of this approach.

The strict regulations imposed by platform providers, in combination with the excessive aggregation of data on users and their online activities, have sparked criticism concerning the lack of privacy from Michael Zimmer, Christian Fuchs, and others. The general threat of surveillance exerted by state authorities has been convincingly addressed and criticized by Ronald Deibert, Evgeny Morozov, Wendy Chun, Jonathan Zittrain, and others.

The public quality of interaction and communication on social media platforms has been described by Stefan Münker as “emerging digital publics”. The framing of social media as a public sphere is not yet highly developed, but it provides, in my opinion, the most intriguing approach to understanding social media platforms and their impact on society.

Yes, I think we have made some progress in describing media practices more accurately and in giving up on the media myths that constituted the legend of new media as emancipating users. And this even plays out in the realm of the general public. In Germany, the Frankfurter Allgemeine Zeitung, a conservative/market-liberal newspaper, calls for a society-wide debate on technology and provides a platform for members of the Chaos Computer Club to criticize technocratic policies and short-sighted understandings of technology and media. Evgeny Morozov is also doing an excellent job with his crusade against techno-populism; or think of Jaron Lanier's superb critique of imprudent media use and hasty enthusiasm. It is absolutely crucial to have these debates within the popular discourse, as it is the popular discourse that shapes the general understanding of technology. That is why I have tremendous respect for scholars who are able to reach out to general audiences and to translate complex issues into accessible language.

As you note, participation has become an increasingly problematic word that is used by many different people in support of many different and often contradictory claims about the relationship between new media technologies and consumer empowerment. What steps can we make to reclaim participatory culture as a productive category for cultural analysis?

My objective was to deconstruct the ideological connotation as well as the emotional charge of 'participation'. Recently, we have seen a similar problem with the metaphor 'social media'. It fuels a misunderstanding of media and media practices, and it structurally obscures the agency of technology (the back-end as well as the user interface), power structures, and economic factors.

In my opinion, it would already be helpful to pay close attention to the language we use to describe media and media practices. Many scholars can easily identify with emancipation, anti-hegemonic attitudes, and political activism. However, in our enthusiasm we tend to overestimate certain practices and misrepresent media use. We therefore have to take off our blinkers. I often tell my students that if you really like your object of research, the chance is high that you will make mistakes and neglect important facts that would disturb your picture.

That's funny. I tell my students that when you start from too critical a perspective, it is easy to flatten or simplify the phenomenon you are studying, to not look very deeply for redeeming or contradictory features, and to not take seriously what the activity might mean for those who embrace it.

Of course I agree. Being too critical is just as distorting as being too enthusiastic. What is needed is curious interest and willingness to get to the bottom of things, even if it will change your previous view of them. And research methods provide useful ways to do so.

'Participatory culture' can serve as a productive category for cultural analysis if scholars distance themselves from their personal appreciation of media practices that might be close to their hearts but are not necessarily representative of online culture. This would help us recognize the heterogeneity of the phenomenon we call participation as well as the ambiguity of technology. Taking technological aspects thoroughly into account, using 'digital methods', and putting case examples into the perspective of the broader picture will help us do so.

The forms of participation which interest me the most are explicit participation -- that is, places where people are making conscious decisions to create media or otherwise communicate with each other about issues of mutual concern. Can you explain what you mean by implicit participation and how it relates to the claims being made by Web 2.0 companies to support participation? In what sense is it meaningful to describe “implicit participation” as participation? What are we participating within?

With implicit participation I describe how platform providers have integrated user activities into easy-to-use interface designs and eventually implemented them into business models. Implicit participation describes how user activities are channeled through the platform provider's design decisions. This ranges from interface elements such as the like button, or the incentive of view counts on Flickr or YouTube, to strategies where users unknowingly participate in additional functions of the feature they are using on a platform. The reCAPTCHA is an example of implicit participation where information provided by users to access a web feature is re-used in a completely different context. Many so-called gamification practices are examples of implicit participation.

I would argue that the popular 'social media' platforms thrive on implicit participation. It consequently reduces their dependence on intrinsic motivation, which is so crucial in explicit participation. Explicit participation becomes merely optional. The key is to lower the threshold and encourage the generic production of content: creating data by simply using the platform's features, spreading or multiplying content through the easy-to-use features of reproduction (retweet, repin, share, etc.), or interacting through ephemeral features such as the like button. We will see many more and far better forms of implicit participation integrated into web platforms in the future.

A key difference between our perspectives is that you place a much greater focus on the ways that technologies enable or constrain participation, whereas I primarily discuss the social and cultural motives which shape how people use technologies. Let’s assume we both believe that both technology and culture have played a role in defining the present moment as one where issues of participation are increasingly central to our understanding of the world. I would argue that there is a difference between understanding technology in terms of affordances and in terms of determinants, given the degree to which technologies are, as you note, subject to various forms of appropriation and redefinition once they have been designed, and given that digital media can be re-coded and reprogrammed, even at the grassroots level, by those committed to alternative visions of social change. I worry, though, that ascribing too much power to technology results in models of technological determinism, which make certain outcomes seem inevitable. There has been such a strong tendency in this direction over the past several decades, whether critics worrying that Google has made us stupid or advocates talking about the democratizing effects of the internet. Thoughts?

I am also worried about a simplified view of 'technological effects'. Especially in the popular discourse, there is a plethora of short-sighted publications on the potential benefits or downsides of technological development. However, I would not argue that those perspectives actually inquire into the technology; rather, they use it as a black box that facilitates whatever effect they wish to see unfold. Unlike scholars, those writers are in the business of selling books, not in the business of conducting research.

I do not think that I am supporting a techno-determinist perspective by investigating technological qualities and by paying attention to the way design affects user activities. The popular 'social media' applications teach us that we have so far underestimated the role of interface design, back-end politics, and API regulation in the cultural production and social interaction playing out on these platforms. I can't possibly neglect that power also comes in the shape of technology, or, as Andrew Feenberg put it: “technology is the key to cultural power”. I am not focused on technology as determining on its own account, but on its agency in close interrelation with designers, users, ownership structures, media discourses, and other actors.

While my primary emphasis in talking about participatory culture might be described as symbolic appropriation (i.e. the manipulation of narratives, characters, symbols, icons, or brands), the central focus of your analysis is on “hacking” the material dimensions of technology, including, for example, game modifications or free software efforts. We might extend this focus to include a broader array of other material practices -- including Makers and Crafters -- who are central to current discussions of digital culture. What do you see as the consequences of this shift in focus in terms of our understanding of how participation works or what a more participatory culture looks like?

What I really liked about Textual Poachers was that you compellingly showed how open media texts are, not only to interpretation, as Fiske had pointed out, but directly to 'material' appropriation, and how this contributed to an entire field of cultural production. The world wide web then made the textual poachers explicitly visible, to marketers and the general public alike. The second aspect I find important, and unfortunately this aspect is frequently overlooked, is that you outlined the history and the predecessors of today's read-write culture. With maker culture, similar debates concerning 'poaching' will unfold. We will see a new debate on copyright, and corporations will go out of their way to protect their designs from being 'printed'. There will be attempts by providers of 3D printers to control the device and its use. I would assume that the dynamics which I have dubbed confrontation, implementation, and integration will play out in relation to maker culture as well. The recent debates on MakerBot's decision to deviate from the open-source model indicate an attempt at implementation.

As you note, the initial wave of excitement about participatory culture has been met with strong critiques focused on issues of free labor and data mining as forms of exploiting the popular desire for more meaningful participation. Can you describe some of the ways that users have sought to assert their own claims on the technology in the face of their ownership and exploitation by the creative industries?

It's remarkable that dissent with a corporate platform plays out in quite traditional forms of protest and petition. On Facebook, users 'like' petitions that represent their claims for better privacy regulations, formulate a Social Media Bill of Rights, call for a QuitFacebookDay, etc.

There are other examples, such as the Social Media Suicide Machine, which allows users to delete their profiles. Then there are alternatives to the commercial web platforms and services. Diaspora was heralded as the Facebook killer and is now depicted as a dud. The UnlikeUs conference has been established as a platform for critics of 'social media monopolies' to connect and discuss alternatives. We can also see civil rights groups and privacy advocates lobbying on behalf of users. However, I am afraid that the majority of users can't be bothered with these issues.

You conclude the book with this important statement: “We must not sit on our hands while cultural resources are exploited and chances for enhancing education and civil liberties are at stake.” This seems like a powerful statement of what’s at stake in debates about participatory culture. So, what forms of action do you think we can or should take as scholars and as public intellectuals to respond to this situation?

The easy-to-use interfaces of the social web drew a large new group of users to the world wide web. They also put the web back on the agenda of policy makers seeking to regulate, control, and monitor user activities. Designed as advertiser-friendly platforms, social media inherently provide the possibility of user assessment and control through APIs, which are already routinely used by law enforcement. We can also see how powerful companies such as Apple, Facebook, Google, and Amazon affect cultural freedom on the web. Facebook's prudery appears (especially to us Europeans) astonishingly weird and hostile to culture and freedom of expression. However, since social media platforms have emerged as an expanded public sphere, the censorship of items that might distort the rosy world-view of advertisers and the naivete of uninformed users is appalling. I would not mind if those platforms were a shopping mall somewhere in the margins of the world wide web, but they are increasingly becoming a central part of the web and therefore play an important role in our public sphere.

Unsurprisingly, Facebook is the poster boy for policy makers thinking about eGovernance or other fancily dubbed forms of harmless civic participation. Facebook promises a dangerously safe way of dealing with citizens, as its implicit participation features render participation an easy-to-handle commodity and reduce it to mere lip service. Something that, even in democratic societies, is still very appealing to policy makers.

What we need is a society-wide debate on technology and its role in society. We need to discuss to what extent we accept platforms distorting our view of reality by creating a controversy-free and advertiser-friendly filter bubble.

Mirko Tobias Schäfer is assistant professor of new media and digital culture at the University of Utrecht (Netherlands) and research fellow at Vienna University of Applied Arts. He blogs at www.mtschaefer.net.

What Do We Know About Participatory Cultures: An Interview with Aaron Delwiche and Jennifer Jacobs Henderson (Part Three)

As your book illustrates, participatory culture is a global phenomenon, but so far, most of the research has focused on participatory culture in the English speaking world, and mostly, in the United States. What might we learn about participatory culture if we expanded our investigation to consider, for example, the Global South?

At one time, we had an excuse for such oversights.  We researched where we lived because it was physically and financially prohibitive to do otherwise.  This is no longer the case. There is no doubt that some of the most interesting participatory cultures are situated far beyond North America and it is time we all start looking closely at those cultures.

We are also optimistic that this imbalance will begin to be righted during the coming decade as youth across the globe synthesize social awareness, fluency in multiple languages, and expertise in communication technologies.  We predict (or at least hope for) a flood of research efforts on participatory cultures in the next ten years.

Addressing the geographical research gap is essential if we are to better understand and act upon the potential power of participatory cultures.  Since the emergence of fan studies in the 1980s, we (academic researchers) have built a robust body of literature on participatory fan cultures.  The same can be said for research on participatory democracy and budgeting as well as online gaming cultures.  There are enormous gaps in the literature, though, as far as other participatory cultures are concerned.

This is one reason that we chose to expand the boundaries of our book beyond the field of communication and invited authors who could speak to fields and cultures with lengthy and diverse research agendas – for example, poetry and literature, science, social action.  If we are lucky enough to publish a second collection, currently under-researched geographic locations and topical areas will be a primary focus.

What do you see as some of the major hurdles before we are going to be able to achieve a more participatory culture? What are the most important battles right now in terms of defining the terms of our participation?

As with other institutionalized problems, we must change the perceived value of participation.  This shift must occur in everything from education to economic structures.  For example, students are told they have violated the Honor Code if they work with others to find solutions to a homework assignment.  Team members are rarely rewarded equally for workplace outcomes (team "leaders" always get paid more).  Diplomacy is seen as less valuable than conquering.  We don't expect participation to gain value overnight.  Power is diminished or at least transformed when it is divided, and we all know there are many people who would like to hold on to their power.

Altering the perception of participation is particularly challenging in cultures that value individualism over collectivism.  We do believe this perception is shifting, if only slightly.  In recent years we have begun to hear public figures talk about the possibility of making money and doing good, elected officials articulate a basic standard of health and opportunity, and parents question the value of memorization rather than participation in their children's education.

How might we increase the value given to diversity and dissent within participatory cultures? Is there a danger that such communities tend to be consensus-based and thus are more apt to exclude people who persistently disagree with shared goals and values?

We do not value diversity and dissent as much as we can and should in participatory cultures. Many people do not see online spaces as open and inviting.  In fact, "incivility" and "nastiness" are the concerns most often voiced in opposition to participatory engagement.  Honestly, it's hard to convince people otherwise when the "comments" sections of spaces such as YouTube and CNN are filled with illogical, unsupportive, and hateful commentary.

Consensus is hard to come by these days; in fact, it is much harder than in years past. This is both a good thing and a bad thing. Our touch points of shared experience (mediated and otherwise) are far fewer than even one generation ago.  Reading and relying only on opinions with which we agree has become commonplace.  Combine this echo-chamber reality with online anonymity and you face an impressive foe.

So, on one side we have an age of disagreement mingling with anonymity and on the other we have cultures that derive success from consensus.  Diversity and dissent can get lost on either side.  Only a culture that can instill the value of listening survives this war.  And we all know that listening is tough, especially when people feel they have something important (or more insightful) to say.

This delicate balance of agreement is what sustains hope in some participatory cultures and destroys others. The strongest participatory cultures are ones in which all voices carry the same weight, all opinions are heard, and all ideas are deliberated.  The weakest participatory cultures are those that allow the crush of consensus or the minority voice to dominate.  Participatory cultures are difficult to build and maintain but, when they work, they are extremely powerful forces in the lives of their participants and across society at large.
The book closes with an ethical framework for thinking about participatory culture. What do you see as the core values which might govern an ethics of participation? What mechanisms might exist for inspiring greater ethical reflection within existing and emerging participatory cultures?
Almost all ethical frameworks are grounded in the concept of selflessness.  Almost all activities in online participatory cultures are inherently self-centered.  We read. We search. We post. We share.  Most often we do these things for us, not for any greater good.  It might not be easy to flip the switch from selfishness to selflessness in these spaces, but we do see stronger communities where the balance has tipped.

We could begin a movement toward selflessness by gently nudging participants in online communities to consider others in their visual and rhetorical choices.  The ethics chapter of the Handbook calls on people to start standing up for each other in online communities – to take on flamers and to support those who are ridiculed.  Encouraging constructive responses would also help with this move from selfishness to selflessness.  We see this work well on fan fiction sites where members read, help edit, and provide encouragement to fellow writers.

Quite honestly, ethical reflection occurs infrequently.  Most ethicists would claim you need at least five steps to make a good decision: identification of the ethical problem, acknowledgment of the parties involved and your loyalties to each, conscious deliberation, purposeful action, and reflection.  The current ethical decision-making process is most often reduced to just two steps: act and justify those actions. We could make participatory cultures more ethical if we could convince people to engage in even the briefest contemplation before posting, uploading, or commenting.  Few people do this today; more should.

Critical studies writers, including the Janissary Collective, featured in the collection, express concern that participation is illusory and coercive, that we participate only on the terms that powerful groups allow us. What might those of us advocating for a more participatory culture learn from those critiques?

If one believes that human history provides examples of ever-greater participation, and if one accepts that there are more opportunities for political, economic, and cultural participation than ever before, it is easy to get caught up in idealistic fervor. If we drink too deeply of our own theoretical Kool-Aid, we become irrelevant at best and tyrannical at worst. Critiques such as those authored by the Janissary Collective and the British cultural critic Paul Taylor are invaluable because they remind us that things are never that simple.

There are many versions of pessimistic critique in cultural studies and critical theory. One variant argues that democracy is hopeless. According to this view, attempts to foster greater participation and inclusion are the enemy of individual freedom. As expressed by the Janissary Collective, this position holds that "participatory culture can never provide the basis for the good life – in fact, it can be its worst enemy" (p. 264).

A second form of pessimism presents itself as even more negative about participatory culture, but there is a glimmering ember of optimism lurking beneath the surface. This view does not argue that democracy is intrinsically flawed. Rather, it unleashes withering criticism of those thinkers and activists who gloss over the many ways that participatory culture and participatory technologies are abused, exploited, and farcically celebrated by political and economic elites. When Paul Taylor observes "whether interacting in a self-consciously local fashion as consumers of lattes or technologically as hackers of computer systems… we are all perhaps still ultimately passive" (p.255), he implicitly mourns the loss of authentic participatory culture.

Both critiques are essential. The "democracy is hopeless" position reminds us that we must respect the individual right to resist participation. The "participatory culture is a web of false promises" position helps us diagnose where the dream risks becoming a nightmare. Embedded in the passionate prose of Taylor's piece, participatory culture activists can tease out guideposts that will help us determine our next steps.

Aaron Alan Delwiche (Ph.D., University of Washington) is an associate professor in the Department of Communication at Trinity University. His research interests include participatory culture, intergenerational gaming, and wearable computing. In 2009, with support from the Lennox Foundation, he organized the lecture series Reality Hackers: The Next Wave of Media Revolutionaries. In 2010, he delivered a talk titled "We are all programmers now" at TEDx San Antonio. He is also co-editor of The Participatory Cultures Handbook (2012).

Dr. Jennifer Jacobs Henderson (Ph.D., University of Washington) is an associate professor and chair of the Department of Communication at Trinity University in San Antonio, Texas.  Her research addresses the boundaries of speech in media and participatory cultures as well as the ethics of this speech.  Jennifer is the author of the 2010 book Defending the Good News: The Jehovah's Witnesses and Their Plan to Expand the First Amendment and co-editor of The Participatory Cultures Handbook (2012).

 

What Do We Know About Participatory Cultures: An Interview with Aaron Delwiche and Jennifer Jacobs Henderson (Part Two)

As you note, the term, "participatory culture," can be seen as emerging from the cultural studies tradition, but there is also a strong history of writing about "participatory politics." Are these separate conversations? What might these two strands of research have to say to each other?

The participation conversation is a very broad one and, as you rightly note, one that has ebbed and flowed across the centuries.  What is unique to each time period is not the concept of participation itself but the dominant focus of that participation - political participation, economic participation, social action.  Of course, even when one topic dominated the push for participation, thousands of smaller participatory cultures also thrived around issues such as crafting, gamesmanship, agriculture, and invention.  The communication technologies of this century have simply divided and amplified the topics, allowing many more participatory cultures to flourish in unison.

Some have argued that all cultures are by definition participatory. What distinguishes contemporary forms of participatory culture from their predecessors within, say, folk culture?

Participatory cultures are not new.  They are simply the most recent manifestation of humanity's desire to be a part of something. One of the reasons there is so much attention placed on participatory cultures now is that they stand in stark contrast to the postmodern theories that immediately preceded them.  Postmodern theorists valued resistance, disruption, and divergence, while participatory cultures value contribution and collaboration.  Today's participatory cultures are both uniquely new and comfortably traditional venues – like returning to your family home for Thanksgiving to find your bedroom is the new home office.

 

Writing about participatory culture poses a different set of questions than writing about audience resistance, a concept that dominated cultural studies a few decades ago. Resistance to what? Participation in what? What are some of the current models for describing what people "participate" in when they are part of a participatory culture? Is participatory culture necessarily a collective phenomenon or does it make sense to talk about participating as an individual?

The concept of audience resistance played an important role in cultural studies, but the notion of resistance seems almost quaint when one considers the nature of political, economic, and cultural power in the early 21st century. As individual citizens, each one of us is situated within multiple power networks.

In many instances (e.g. the physical borders of the nation-state, the globally dispersed contours of global capitalism), power relationships are imposed upon us at birth. We might be proud to be Americans (or Chinese or Canadians), but our national pride is a lucky accident. The physical coordinates of our birthplace and the citizenship status of our parents determine our initial location in the networks of state power. Financial power networks are also imposed upon us; we are born into capitalism. We might choose to remedy the shortcomings of the economic status quo by building alternative exchange networks (e.g. farmers markets, cooperatives, gift economies, remix culture), but it is almost impossible to completely extract ourselves from the domination of global capital.

The good news is that we can also situate ourselves in political, economic, and cultural power networks of our own choosing. This is hardly a new phenomenon – Alexis de Tocqueville celebrated free associations in Democracy in America as far back as 1835 – but the emergence of the global Internet and affiliated communication technologies has accelerated our ability to create alternative networks from the ground up at the same time that we work to transform dominant institutions.

Is participation necessarily a collective phenomenon? To the extent that we participate in networks with other human beings, there is always a collective dimension. We engage, we share, we mentor, we feel connected, and we care about what other members of the community think. This is necessarily social.

However, the decision about which networks we select as meaningful outlets for participation is almost always an individual decision. If we truly value participatory culture, we must recognize the right of individuals to choose to not participate.

 

Pedagogical concerns remain central to these discussions, if we are to ensure that the widest possible range of people have access to the skills and resources they need to meaningfully participate. What insights might the book offer to educators who want to bring more participatory practices to schools, libraries, and other public institutions?

 

The difficult part about participatory pedagogy is that educators must be willing to relinquish absolute control over the conversation.  For a very long time, especially in Western educational settings, teachers were situated at the top of hierarchical learning models. In educational participatory cultures, learning does not necessarily happen quickly, it is not delivered in a tidy, self-contained package, and it certainly does not conform to government standards.  Learning emerges from the conversational and collaborative journey; it is not located in "the correct answer to the teacher's question." Members of participatory cultures find their own way to solutions, often not by the most direct or conventional paths.

Your book discusses practices such as participatory budgeting which involve the interface between citizens and governments. What has been the track record so far for such initiatives? What are the biggest challenges in opening existing institutions to greater forms of democratic participation?

Neither of us is an expert in participatory budgeting, but we were encouraged to see related panels at the SXSW Interactive Conference this year in Austin. For example, one panel focused on participatory budgeting and the use of crowdsourcing to determine how government funds should be spent.  To date, most of the successful initiatives have taken place in Latin America and Europe.  It was heartening to see similar discussions in the United States.

 
