Boy and Girl Wonders: An Interview with Mary Borsellino (Part One)

Robin didn't start with Robin. Robin won't end when Robin ends. In fact, it's arguable that Robin's already begun to move on from Robin. In less smartypants language, what I mean is that the ingredients which were brought together to create the character of "Robin," Batman's red-and-green-and-gold-wearing sidekick, were ingredients which already shared numerous common elements. And once Robin could no longer embody these elements, other pop culture arose to take over the character's place.

Or so go the opening paragraphs of Mary Borsellino's fascinating new work, Girl and Boy Wonders: Robin in Cultural Context. The self-published text, which can be downloaded here, explodes with new insights and information about Batman's oft-neglected and marginalized sidekick, the kinds of information that could only come from a dedicated aca-fan. I will be honest that despite being a life-long Batman fan, I had never given that much consideration to Robin's cultural origins, his contributions to the series, or his influence on our culture. Works like William Uricchio and Roberta Pearson's The Many Lives of the Batman or Will Brooker's Batman Unmasked have made significant contributions to our understanding of the mythology around the dark knight, but most of them give short shrift to his "old chum." Borsellino argues that Robin's marginalization, sometimes in response to homophobia, sometimes in response to a desire for a "more mature" caped crusader, is part of his message. The character has special appeal, she argues, for "those readers and viewers who are themselves marginalized."

I checked in with Borsellino recently, asking her to share some of her insights with my readers.

This project emerged in part from your own very active involvement in Project Girl Wonder, which responded to what you saw as DC's neglect of Stephanie Brown. Can you give us some background on this controversy? What were the issues involved? Why was this character so important to you? What was the outcome of the campaign?

Actually, Project Girl Wonder came out of this project, not the other way around. I was so immersed in the potential meanings of all the stuff going on with Robin in comics, and so tuned in to the rapid decline in relevance of DC's mandated interpretation of Robin. The idea of Stephanie Brown as Robin was so fresh and strange as a direction, but it was handled so clumsily and with such obvious institutionalised sexism that it was pretty vile to witness, both as a cultural observer and as a fan who's also a feminist.

Essentially, for those not familiar with the character or with Robin's larger back story: when the second Robin, a boy named Jason, died, Batman created a memorial out of his costume in the Batcave. Stephanie was the fourth Robin, and her costume was different to those of the three boys who'd worn it before her in that she sewed a red skirt for herself. Just a few months after her first issue as Robin was released, Stephanie was tortured with a power drill by a villain, and then died with Batman at her bedside.

The sexualised violence alone was pretty vomitous, but what made it so, so much worse for me was that Batman promptly forgot her. DC's Editor in Chief had the gall to respond to questions of how her death would affect future stories by saying that her loss would continue to impact the stories of the heroes -- how sick is that? Not only is the statement clearly untrue, since the comics were chugging along their merry way with no mention of her or her death, but it was also an example of the ingrained sexism of so much of our culture. Stephanie herself was a hero, and had been a hero for more than a decade's worth of comics, but the Editor's statement made it clear that he only thought of male characters as heroes, and the females as catalysts for those stories. It was a very clear example of the Women in Refrigerators trope, which has been a problem with superhero comics for far, far too long.

Long story short, I got together with a few like-minded comics fans and set out to petition DC Comics into giving Stephanie a memorial like Jason's -- to acknowledge that she was just as much a hero, and just as much Robin, as any of the boys. It made such a clear and striking image: a costume in a memorial case, just like Jason's now-iconic one, but this time with a little red skirt on it as well. We couldn't have asked for a better logo for our cause.

We were lucky enough to have some invaluable help, both outside comics and inside. Shannon Cochran wrote a wonderful, in-depth article about the situation for Bitch magazine; we were a Yahoo site of the day; the webcomic Shortpacked ran a sharply funny strip about it all; and several comics writers working for DC -- Geoff Johns and Grant Morrison, in particular -- dropped references to the absence/potential presence of a memorial case for Stephanie into comics.

In the end, DC glossed it all over by having a storyline where Stephanie shows up, miraculously alive this whole time, and having the current Robin say to Batman "oh! you always knew she was alive! no wonder you never made her a memorial case!". Despite the fact that stories in the interim had featured Stephanie's death, autopsy, burial, and appearances as a spirit in the afterlife. Nope, Batman knew she was alive the whole time! Good job with the damage control there, DC.

Still, a live heroine's better than a dead one any day, so I count the whole thing as a victory in the end.

Critics have written a fair amount about how Batman's persona was inspired by earlier popular heroes, including Sherlock Holmes and the Douglas Fairbanks version of Zorro. What popular figures helped to inform the initial conception of Robin?

Within comics, the most direct inspiration was Junior, who was Dick Tracy's young offsider. Robin was the first time that boy helper figure was put into a superhero costume, but Junior was playing the detective's assistant role years before, and screwing up in all the same ways Robin so often does, ending up as a hostage and things like that. More widely, you've gone halfway to answering your own question -- Sherlock Holmes had Watson there, to listen to his theories and help solve the mysteries. The sidekick role has been around a long time, and provided the template for Robin's role.

Culturally, the figure of the daredevil boy hero is an ancient one, dating back through epic literature of the middle ages to the statuary and myths of Greece and Rome. Robin just gave the archetype a new costume.

You suggest that the marginalization of Robin as a character has helped to make the sidekick a particularly potent point of reference for other groups who also feel marginalized. Explain.

The two examples I use in my book are queer fans and women, though I also know readers who've used this same framework for class and race. As a queer person, or a woman, or someone of a marginalised socio-economic background, or a non-Caucasian person, it's often necessary to perform a negotiated reading on a text before there's any way to identify with any character within it. Rather than being able to identify an obvious and overt avatar within the text, a viewer in such a position has to use cues and clues to find an equivalent through metaphor a lot of the time.

A recent example of this is Spock and Uhura in the new Star Trek movie. Uhura has always been vitally important as a role model to women of colour -- even Martin Luther King Jr thought so. And she still fulfils that role in the new movie. The narrative themes of racial discrimination and of the conflicts which dual cultural heritage can bring with it are in the movie as well, but they're not the story of Uhura, because Gene Roddenberry was committed to the idea of a future where the crew of a starship could be mixed-race without remark. The character who carries these themes is Spock: he's the one with all the 'outsider' cues in his makeup. I think that goes part of the way toward explaining why the recent Star Trek movie has seen a massive re-emergence of Kirk/Spock slash on the fannish landscape: female fans and those seeking a queer reading are drawn to that sense of marginalisation, of the ongoing fight to be recognised as present and worthy.

I got off-topic a bit there, sorry -- my reason for bringing up Spock and Uhura was to demonstrate that 'otherness' as part of a character's construction isn't necessarily bound directly to traits such as race or gender. It can stand for them, but does so obliquely. And Robin, by being put down and rejected by wave after wave of commentators and creators, has come to embody anything that's been sidelined or disregarded, anything that's rejected in the relentless quest to make Batman as heteronormatively masculine and dour as possible. Just as those who fight against personal discrimination can find an avatar in Spock, those who struggle to re-establish their voice in dialogues where they've been silenced can find an avatar in the way Robin is pushed out of the way by official texts.

Many know of the ways that DC has struggled with the homophobia surrounding the relationship between Batman and Robin. How has this concern shaped the deployment of Robin over time? Are there any signs that in an era of legalized gay marriage, our culture may be less anxious about these issues?

We also live in an age of Prop 8, alas. I live in Australia, and both Australia and America recently switched from a longstanding conservative leadership to a potentially more progressive government -- but both Prime Minister Rudd and President Obama have gone on record as saying that they believe marriage should be between a man and a woman. Progress hasn't yet progressed as far as I'd like to see it go, frankly.

And I think DC Comics is an absolute trainwreck mess at this point, to be even more frank. You only have to look at All Star Batman and Robin, by Frank Miller and Jim Lee, to see what a disaster the company's current concept of a flagship book is. The writing's incredibly sloppy, sexist, homophobic, and unengaging. "That is so queer" is used by Robin as a slur. Batman calls Robin "retarded" and declares himself "the goddamn Batman". It would be hilarious if it wasn't so awful.

It hasn't always been that bad, of course, but right now it appears to me that DC is more anxious than ever about potential gay readings. And then there's Christian Bale, who has stated outright that he'll go on strike if anybody tries to incorporate Robin into the movie franchise. His Batman is so joyless that it's no wonder everybody went starry-eyed for the Joker -- the guy may be a psychopath, but at least he seems to know that running around Gotham City in a stupid outfit is meant to be fun.

You argue that Robin is in many ways a "transgender figure." Explain.

Robin crosses all sorts of imposed gender boundaries, both literal and figurative. Carrie Kelley, for example, the young girl who becomes Robin in Frank Miller's The Dark Knight Returns, is referred to by a news broadcaster as 'the Boy Wonder'; she looks completely androgynous in-costume, and so is assumed to be a boy. Dick Grayson and Tim Drake both assume female identities to go undercover in numerous stories -- Dick even played Bruce's wife on one occasion back in the forties -- and Stephanie Brown's superhero identity before she became a Robin, the Spoiler, is thought to be a boy even by her own father.

Those are just the literal examples of gender transgression. There're also a lot of background cultural cues coming into play, in the way the Robin costume looks, the way different backstories for the Robins are structured, and how sidekicks function in adventure narratives -- all these elements work against the notion of pinning Robin down as definitively male or female as a character; the only classification which really fits is that of being constantly in motion between options, and unclassifiable.

Mary Borsellino is a freelance writer in Melbourne, Australia. She has published essays about subjects such as the shifting portrayals of Batman's childhood family, a feminist critique of the TV show Supernatural, and gender in Neil Gaiman's Sandman comics. She is currently working on a series of YA novels which will begin release later this year and which have been described as 'Twilight for punks'. Mary is the Assistant Editor of the journal Australian Philanthropy.

Calling Young Gamers. Share Your Aha! Moment!

My friends Alex Chisholm and Andrew Blanco from the Learning Games Network asked me if I could use this blog to help them spread the word about some exciting new activities designed to engage young gamers/media makers and to encourage reflection on the value of games for education. Both are causes close to my own heart, as regular readers will know. Here's what Blanco has to say about the initiative: Lights. Camera. Action! Tell us what you think a learning game looks like. Share a story about a connection you made between something you did in a game and something you had to learn in school.

From the Learning Games Network (LGN) comes an interesting invitation for user-generated content. A recently established 501(c)(3) non-profit organization founded by former MIT CMS Director of Special Projects Alex Chisholm, the MIT Education Arcade's Eric Klopfer and Scot Osterweil, and the University of Wisconsin-Madison's Kurt Squire, LGN was formed to spark innovation in the design and use of video games for learning. In addition to bringing together an integrated network of educators, designers, media producers, and academic researchers who all have a hand in creating and distributing games for learning, they're also bringing forth opportunities for youth to contribute to conversations, research, and development. It's a no-brainer for today's students to share their perspectives in a more participatory role as the future of education is shaped.

The first of two efforts is a video contest, notable in its invitation to students to help inform educators and designers with their own thoughts on video games as tools for learning. Requiring entrants to create their own two-to-three minute YouTube videos, the contest offers two themes from which students can choose.

(1) The first challenge asks them to describe an "aha moment" they've personally encountered: "If you've experienced that spark of realization, that moment of epiphany between an idea from a game and something you learned -- at school, at home, or anywhere else -- tell us about it in your video."

(2) The second puts students in the role of teacher or coach, asking them to describe an idea for a learning game they would employ to help others learn: "What kind of game would it be? What would it help players learn? Why would your video game be a better way to learn something? In your video, tell us what challenges players would face and how they would learn from them."

Contest rules can be found at http://www.aha-moment.org. Students must be 13 years old and above to enter; there are separate categories for middle school, high school, and post-secondary students. Thanks to sponsorship by AMD, the first place prize for each category is a 16-inch HP Pavilion dv6 series notebook, powered by an AMD Turion™ X2 Ultra Dual-Core Mobile Processor. Deadline for submissions is midnight on July 31, 2009.

A second, longer-term initiative is LGN's Design Squad. With game design and production requiring many rounds of iteration during which details are play-tested, tuned, and enhanced, Design Squad members will learn about the development process and the integration of gaming into both formal and informal learning settings, as well as serve as a pool of rapid-reaction testers and reviewers during the creation of learning games by LGN and other organizations that are part of its network. This is a great opportunity for students to play an important role in creating innovative new learning games, enabling them to contribute to design discussions, play testing, production reviews, and early marketing concepts. LGN aims to amplify the voices of today's students among the companies, writers, and designers that are trying to better understand how games are both a powerful medium for education and a challenge to develop if one doesn't understand what makes an engaging and rewarding experience.

LGN is looking for highly motivated, creative, and articulate middle school, high school, and undergraduate students to (a) participate in exclusive workshops and online sessions with leading learning game designers, producers, marketers, and researchers; (b) regularly review and test learning games that are in development; and (c) work both locally and virtually with LGN member organizations across the U.S. Design Squad members in the Boston area will work with the LGN team in its newly established Cambridge studio, a stone's throw from the MIT campus. Interested students between the ages of 13 and 20 can send a note to designsquad at learninggamesnetwork dot org. Or, if you're a teacher or parent who would like to nominate a student, please contact LGN.

LGN plans to review inquiries and send applications to interested or nominated students through the end of July before announcing the LGN DS 2009-2010 team in time for back-to-school.

Questions about the Learning Games Network can be directed to Andy Blanco, Director of Program and Business Development, andy.blanco at learninggamesnetwork dot org.

Risks, Rights, and Responsibilities in the Digital Age: An Interview with Sonia Livingstone (Part Two)

A real strength of your new book, Children and the Internet: Great Expectations and Challenging Realities, is that it combines ethnographic and statistical, qualitative and quantitative approaches. What does each add to our understanding of the issues? Why are they so seldom brought together in the same analysis?

I'm glad you think this is a strength, as it's demanding to do, which may be why many don't do it. The simple answer is that I am committed to the view that qualitative work helps us understand a phenomenon from the perspective of those engaged in it, while quantitative work helps us understand how common, rare or distributed a phenomenon is.

Personally, I was fortunate to have been trained in both approaches, starting out with a rigorous quantitative training before launching into a mixed methods PhD as a contribution to a highly qualitative field of audience research and cultural studies. While I don't argue that all researchers must do everything, I do hope that the insights of both qualitative and quantitative research can be recognised by all; as a field, it seems to me vital to bring these approaches together, even if across rather than within projects.

You begin the book by noting the very different models of childhood which have emerged from psychological and sociological research. How can we reconcile these two paradigms to develop a better perspective on the relationship of youth to their surrounding society?

I hope that the book takes us further in integrating psychological and sociological approaches, for I try to show how they can be complementary. Particularly, I rebut the somewhat stereotyped view that psychologists only consider individuals, and only consider children in terms of 'ages and stages', by pointing to a growing trend to follow Vygotsky's social and materialist psychology rather than the Piagetian approach, for this has much in common with today's thinking about the social nature of technology.

However, this is something I'll continue to think about. It seems important to me, for instance, that few who study children and the internet really understand processes of age and development, tending still to treat all 'children' as equivalent, more comfortable in distinguishing ways that society approaches children of different ages than in distinguishing different approaches, understandings or abilities among children themselves.

One tension which seems to be emerging in the field of youth and digital learning is between a focus on spectacular case studies which show the potentials of online learning and more mundane examples which show typical patterns of use. Where do you fall?

Like many, I have been inspired and excited by the spectacular case studies. Yet when I interviewed children, or looked at my survey data, I was far more struck by how many use the internet in a far more mundane manner, hugely underusing its potential and often unexcited by what it could do. It was this that led me to urge that we see children's literacy in the context of technological affordances and legibilities. But it also shows me the value of combining and contrasting insights from qualitative and quantitative work. The spectacular cases, of course, point out what could be the future for many children. The mundane realities, however, force the question -- whose fault is it that many children don't use the internet in ways that we, or they, consider very exciting or demanding? They also force the question of what can be done, something I attend to throughout the book, as I'm keen that we don't fall back into a disappointment that blames children themselves.

As you note, there are "competing models" for thinking about what privacy means in this new information environment. How are young people sorting through these different models and making choices about their own disclosures of information?

There's been a fair amount of adult dismay at how young people disclose personal, even intimate information online. In the book, I suggest there are several reasons for this. First, adolescence is a time of experimentation with identity and relationships, and not only is the internet admirably well suited to this but the offline environment is increasingly restrictive, with supervising teachers and worried parents constantly looking over their shoulders.

Second, some of this disclosure is inadvertent -- despite their pleasure in social networking, for instance, I found that teenagers struggle with the intricacies of privacy settings, partly because they are fearful of getting it wrong and partly because the settings are clumsily designed and ill-explained, with categories (e.g. top friends, everyone) that don't match the subtlety of youthful friendship categories.

Third, adults are dismayed because they don't share the same sensibilities as young people. I haven't interviewed anyone who doesn't care who knows what about them, but I've interviewed many who think no-one will be interested and so they worry less about what they post, or who take care over what parents or friends can see but are not interested in the responses of perfect strangers.

In other words, young people are operating with some slightly different conceptions of privacy, but certainly they want control over who knows what about them; it's just that they don't wish to hide everything, they can't always figure out how to reveal what to whom, and anyway they wish to experiment and take a few risks.

You reviewed the literature on youth and civic engagement. What did you find? What do you see as the major factors blocking young people from getting more involved in the adult world of politics?

I suggest here that some initiatives are motivated by the challenge of stimulating the alienated, while others assume young people to be already articulate and motivated but lacking structured opportunities to participate. Some aim to enable youth to realise their present rights while others focus instead on preparing them for their future responsibilities.

These diverse motives may result in some confusion in mode of address, target group and, especially, form of participation being encouraged. Children I interview often misinterpret the invitation to engage being held out to them (online and offline) - they can be suspicious of who is inviting them to engage, quickly disappointed that if they do engage, there's often little response or recognition, and they can be concerned that to engage politically may change their image among their peers, for politics is often seen as 'boring' not 'cool'.

In my survey, I found lots of instances where children and young people take the first step - visiting a civic website, signing a petition, showing an interest - but often these lead nowhere, and that seems to be because of the response from adult society. Hence, contrary to the popular discourses that blame young people for their apathy, lack of motivation or interest, I suggest that young people learn early that they are not listened to. Hoping that the internet can enable young people to 'have their say' thus misses the point, for they are not themselves listened to. This is a failure both of effective communication between young people and those who aim to engage them, and a failure of civic or political structures - of the social structures that sustain relations between established power and the polity.

Sonia Livingstone is Professor in the Department of Media and Communications at the London School of Economics and Political Science. She is author or editor of fourteen books and many academic articles and chapters on media audiences, children and the internet, domestic contexts of media use and media literacy. Recent books include Audiences and Publics (2005), The Handbook of New Media (edited, with Leah Lievrouw, Sage, 2006), Media Consumption and Public Engagement (with Nick Couldry and Tim Markham, Palgrave, 2007) and The International Handbook of Children, Media and Culture (edited, with Kirsten Drotner, Sage, 2008). She was President of the International Communication Association 2007-8.

If you've enjoyed this interview, you can hear Sonia Livingstone live and in person this summer at the 2009 Conference of the National Association for Media Literacy Education (NAMLE), to be held August 1-4 in Detroit, MI. Her keynote address for this biennial conference -- the nation's largest, oldest and most prestigious gathering of media literacy educators -- is scheduled for Monday, August 3 at 4:00 pm in the Book Cadillac Hotel in downtown Detroit.

The conference -- four days of non-stop professional development on topics such as teaching critical thinking, gaming, media production, literacy, social networking and more! -- will feature more than sixty events, including keynotes, workshops, screenings, special interest caucuses and roundtable discussions. Among the special events are the launch of the new online Journal of Media Literacy Education, the Modern Media Makers (M3) production camp for high school students, and a celebration of the 50th anniversary of Detroit's famous "Motown Sound."

The conference theme, "Bridging Literacies: Critical Connections in a Digital World" speaks to the educational challenges facing teachers, schools and administrators in helping young people prepare for living all their lives in a 21st century culture. Complete details and online registration are available here.

Risks, Rights, and Responsibilities in the Digital Age: An Interview with Sonia Livingstone (Part One)

The first time I saw Sonia Livingstone speak about her research on the online lives of British teens, we were both part of the program of a conference organized by David Buckingham at the University of London. I was impressed enough by her sober, balanced, no-nonsense approach that I immediately wrote a column for Technology Review about her initiative. Here's part of what I had to say:

A highlight of the conference was London School of Economics professor Sonia Livingstone's announcement of the preliminary findings of a major research initiative called UK Children Go Online. This project involved both quantitative and qualitative studies on the place of new media in the lives of some 1,500 British children (ages 9 to 19) and their parents. The study's goal was to provide data that policymakers and parents could draw on to make decisions about the benefits and risks of expanding youth access to new media. Remember that phrase -- benefits and risks.

According to the study, children were neither as powerful nor as powerless as the two competing myths might suggest. As the Myth of the Digital Generation suggests, children and youth were using the Internet effectively as a resource for doing homework, connecting with friends, and seeking out news and entertainment. At the same time, as the Myth of the Columbine Generation might imply, the adults in these kids' lives tended to underestimate the problems their children encountered online, including the percentage who had unwanted access to pornography, had received harassing messages, or had given out personal information....

As the Livingstone report notes in its conclusion: "Some may read this report and consider the glass half full, finding more education and participation and less pornographic or chat room risk than they had feared. Others may read this report and consider the glass half empty, finding fewer benefits and greater incidence of dangers than they would have hoped for." Unfortunately, many more people will encounter media coverage of the research than will read it directly, and its nuanced findings are almost certainly going to be warped beyond recognition.

The last sentence referred to the ways that the British media had reduced her complicated findings to a few data points about how young people might be accessing pornography online behind their parents' backs.

This week, Sonia Livingstone's latest book, Children and the Internet: Great Expectations and Challenging Realities, is being released by Polity. As with the earlier study, it combines quantitative and qualitative perspectives to give us a compelling picture of how the internet is impacting childhood and family life in the United Kingdom. It will be of immediate relevance for all of us doing work on new media literacies and digital learning and, beyond that, for all of you who are trying to make sense of the challenges and contradictions of parenting in the digital age. As always, what I admire most about Livingstone is her deft balance: she does find a way to speak to both half-full and half-empty types and to help each more fully appreciate the other's perspective.

Given the ways I observed her ideas getting warped by the British media (read the rest of the Technology Review column for the full story), I wanted to do what I could to make sure her ideas reached a broader public in a more direct fashion. (Not that she needs my help, given her own skills as a public intellectual.) She was kind enough to grant me this interview, during which she talks through some of the core ideas from the book.

In the broadest sense, your book urges parents/educators/adult authorities to help young people to maximize the potentials and avoid the risks involved in moving into the online world. What do you see as the primary benefits and risks here?

My book argues that young people's internet literacy does not yet match the headline image of the intrepid pioneer, but this is not because young people lack imagination or initiative but rather because the institutions that manage their internet access and use are constraining or unsupportive - anxious parents, uncertain teachers, busy politicians, profit-oriented content providers. I've sought to show how young people's enthusiasm, energies and interests are a great starting point for them to maximize the potential the internet could afford them, but they can't do it on their own, for the internet is a resource largely of our - adult - making. And it's full of false promises: it invites learning but is still more skill-and-drill than self-paced or alternative in its approach; it invites civic participation, but political groups still communicate one-way more than two-way, treating the internet more as a broadcast than an interactive medium; and adults celebrate young people's engagement with online information and communication at the same time as seeking to restrict them, worrying about addiction, distraction, and loss of concentration, not to mention the many fears about pornography, race hate and inappropriate sexual contact.

Indeed, in recent years, popular online activities have one by one become fraught with difficulties for young people - chat rooms and social networking sites are closed down because of the risk of paedophiles, music downloading has resulted in legal actions for copyright infringement, educational institutions are increasingly instituting plagiarism procedures, and so forth. So, the internet is not quite as welcoming a place for young people as rhetoric would have one believe. Maybe this can yet be changed!

Risk seems to be a particularly important word for you. How would you define it, and what role does the discussion of risk play in contemporary social theory?

I've been intrigued by the argument from Ulrich Beck, Anthony Giddens and others that late modernity can be characterised as 'the risk society' - meaning that we in wealthy western democracies no longer live dominated by natural hazards, or not only by those, but also with risks of our own making, risks that we knowingly create and of which we are reflexively aware. Many of the anxieties held about children online exactly fit this concept.

My book tries to show how society has created an internet that knowingly creates new risks for children, both by exacerbating familiar problems because of its speed, connectivity and anonymity (e.g. bullying) and generating new ones (e.g. rendering peer sharing of music illegal). These are precisely risks that reflect our contemporary social anxieties about children's growing independence (in terms of identity, sexuality, consumption) in contemporary society.

As you note, some want to avoid discussion of "risk" because it may help fuel the climate of "moral panic" that surrounds the adoption of new media into homes and schools. Why do you think it is important for those of us who are more sympathetic to youth's online lives to address risks?

I have worried about this a lot, for it is evident to me that, to avoid moral panics (a valid enterprise), many researchers stay right away from any discussion or research on how the internet is associated not only with interesting opportunities but also with a range of risks, from more explicit or violent pornography than was readily available before, to hostile communication on a wider scale than before, and to intimate exchanges that can go wrong or exploit naïve youth within private spaces invisible to parents. I think it's vital that research seeks a balanced picture, examining both the opportunities and the risks, therefore, and I argue that to do this, it's important to understand children's perspectives, to see the risks in their terms and according to their priorities.

Even more difficult, and perhaps unfashionable, I also think that we should question some of children's judgments - they may laugh off exposure to images that may harm them long-term, for example, or they may not realise how the competition to gain numerous online friends makes others feel excluded or hurt.

Last, and I do like to be led in part by the evidence, I have been very struck by the finding that experiences of opportunities and risks are positively associated. Initially, I had thought that when children got engaged in learning or creativity or networking online, they would be more skilled and so know how to avoid the various risks online. But my research made clear that quite the opposite occurs - the more you gain in digital literacy, the more you benefit and the more difficult situations you may come up against.

As I observed before, partly this is about the design of the online environment - to join Facebook, you must disclose personal information, and once you've done that you may receive hostile as well as valuable contacts; to seek out useful health advice, you must search for key words that may result in misleading or manipulative information. And so on. This is why I'm trying to call attention to how young people's literacy must be understood in the context of what I'm calling the legibility of the interface.

You argue that we should be more attentive to the affordances of new media than to its impacts. How are you distinguishing between these two approaches?

Many of us have argued for some time now that the concept of 'impacts' seems to treat the internet (or any technology) as if it came from outer space, uninfluenced by human (or social and political) understandings. Of course it doesn't. So, the concept of affordances usefully recognises that the online environment has been conceived, designed and marketed with certain uses and users in mind, and with certain benefits (influence, profits, whatever) going to the producer.

Affordances also recognises that interfaces or technologies don't determine consequences 100%, though they may be influential, strongly guiding or framing or preferring one use or one interpretation over another. That's not to say that I'd rule out all questions of consequences, more that we need to find more subtle ways of asking the questions here. Problematically too, there is still very little research that looks long-term at changes associated with the widespread use of the internet, making it surprisingly hard to say whether, for example, my children's childhood is really so different from what mine was, and why.

Sonia Livingstone is Professor in the Department of Media and Communications at the London School of Economics and Political Science. She is author or editor of fourteen books and many academic articles and chapters on media audiences, children and the internet, domestic contexts of media use and media literacy. Recent books include Audiences and Publics (2005), The Handbook of New Media (edited, with Leah Lievrouw, Sage, 2006), Media Consumption and Public Engagement (with Nick Couldry and Tim Markham, Palgrave, 2007) and The International Handbook of Children, Media and Culture (edited, with Kirsten Drotner, Sage, 2008). She was President of the International Communication Association 2007-8.

Communal Growing Pains: Fandom and the Evolution of Street Fighter

This is another in a series of essays by my CMS graduate students exploring what personal narrative might contribute to the development of media theory. In this case, Begy blurs the line between games research and fan studies to talk about how he reads the Street Fighter games.

By Jason Begy

Invasion

In mid-October 2007, Japanese game developer Capcom announced what many fans, myself included, thought they never would: the fourth series in the long-running Street Fighter franchise. It had been some eight years since the release of the last official installment, Street Fighter III: Third Strike, and the declining popularity of 2D fighting games made another entry seem unlikely. The announcement of the new game generated enormous buzz within the community: for years, whenever Capcom mentioned "unannounced projects" our collective heart skipped a beat, only for us to be disappointed. This time our wishes were granted, but we were ill-prepared for the full ramifications.

The online focal point of the Street Fighter community is the forum at Shoryuken.com. Here fans gather to discuss strategy (for Street Fighter and countless other fighting games), organize local meet-ups and online matches, share fan fiction and fan art, buy and sell all manner of goods, and generally hang out. The forums are known to be somewhat rough: new members are expected to quickly figure things out on their own. This is partially because many of the members are expert players and they come to interact with each other, not guide beginners through the basics. The community is at once tightly-knit and tightly-wound, which makes gaining acceptance extremely difficult yet extremely rewarding.

When Street Fighter IV was released on February 17, 2009 in the United States, all of the gaming press pointed to Shoryuken.com as the place to go for information, strategies, and tips, and the forums were literally and figuratively crippled. Literally because the servers could not handle the traffic, causing the site to continuously crash for several weeks; figuratively because many of the new members created severe social disruption. The best way to illustrate this is probably an analogy: imagine a thousand people spontaneously showing up at Garry Kasparov's house demanding to know how the pawn moves and you are not far off. The publicity also drew in countless trolls simply looking to cause trouble. This influx led to the phrase "09er," which is derogatory slang for members who joined in 2009. It generally means someone who is disruptive, ignorant, and a fair-weather fan. This is not to say that all new members exhibited such behavior, but a great many did.

External tensions aside, the new members have created conflicting emotions in me and other older fans. On the one hand, our genre of choice has been declining for nearly fifteen years, so a major new release and public approval is a nice affirmation of our tastes. Furthermore, fighting games are fundamentally social. Playing against other people is the only way to experience these games to their fullest, so a large group of new, eager players is certainly a welcome sight. On the other hand, these new members are quick to say that they have "always" been fans, which usually means they played Street Fighter II (the most popular game in the series) and not the eleven or so games between then and now, which raises the question of whether they will jump ship again when they get bored.

While it sounds strange, I find such statements deeply troubling: to leap from one entry to another while maintaining that you have "always" been a fan is to completely disregard what makes Street Fighter special. Even worse, such claims cast a shadow of doubt over my own status as a "fan."

Origin

The source of these feelings is rooted in my own long history with the Street Fighter franchise. I first encountered Street Fighter II sometime in early elementary school and was immediately mesmerized. It was like nothing I had experienced before: two characters face off in one-on-one martial arts combat, and the first to win two rounds wins the match. The game could be played against a computer-controlled opponent or against another person. To control their character each player had an eight-way joystick and six attack buttons, corresponding to three punches and three kicks of different speed and strength.

In addition to their basic punches and kicks, each of the eight characters had a variety of "special moves" that were activated via special sequences of directional inputs and button presses. The inputs for these special moves were not given to the players, who were left to discover them for themselves. Each character also had a variety of "combos." A combo is a sequence of normal and special moves that is uninterruptible and usually requires a higher degree of skill to execute. These too were different for each character and left to the players to discover.

I am not sure what it was exactly that I found so compelling. I certainly found the game fun, but there was something else. The Street Fighter characters themselves were unique: each was full of personality, hailing from different countries and having different fighting styles. Each character's punches, kicks, combos and special moves were different, often drastically so. (Well, mostly different anyway; back then Ken and Ryu were practically identical, but I will return to their divergent evolution later.) This meant that the experience of playing the game was dependent on the character used, leading to a great deal of variability.

As time wore on, my interest in the game waned; I became focused on other games and activities, and the series carried on without me. While I was peripherally aware of the new games and spin-offs, I was not particularly interested. Then during my sophomore year of college some friends introduced me to Street Fighter Alpha 3. This was the first Street Fighter game I had played in at least five years. In many ways Alpha 3 is far beyond Street Fighter II: the graphics and sound are far superior, there are many more characters, and the combat system is much deeper. My introduction to this game brought two significant realizations. The first was that I still loved playing Street Fighter, and the second was that I had missed out on a lot.

While I was ignoring Street Fighter, Capcom had been quite prolific in the genre. In total, five Street Fighter II games were released, followed by four Street Fighter Alpha games, and three Street Fighter III games. There were also two spinoff series: Marvel vs Capcom and Capcom vs SNK. The former series saw four releases, and pitted characters from Street Fighter and other Capcom franchises against characters from the Marvel universe. These games were preceded by two Marvel-only fighting games. The latter series saw two releases, and included characters from Street Fighter and various SNK-developed fighting games (SNK is another Japanese game developer famous for their 2D fighting games). The games were not released in the order I have listed them here; rather, multiple series were simultaneously "current." For example, Street Fighter Alpha 3 was released after the first Street Fighter III game. Needless to say, this was an enormous amount of content, and since my initial exposure to Alpha 3 I have invested a lot of time, money and effort locating, acquiring and playing all of these games.

Reflection

I recognize that the story of my own "return" to Street Fighter is not unlike the stories of those I labeled "invaders" of the community. To be fair, to dedicate oneself to a single genre for fifteen years is to severely limit one's gaming experiences, and one can hardly be blamed for wanting to play other games. For me personally, as I aspire to be a scholar of the medium, devoting large amounts of time to a single genre becomes counter-productive. So am I not in some ways also a fair-weather fan, devoting time and attention when I can or when it is convenient? I have not played seriously for almost two years now, and have never played in a tournament setting. These are troubling questions: who am I to say who is or is not a fan when I myself ignored Street Fighter for so many years? When I no longer have the time to dedicate to the game? Do I have a right to call myself a fan, and if so, to distinguish between established fans and newcomers? Something of an answer, I hope, lies in what I have learned by exploring the series' development.

In playing all of the old games, I discovered that just as the series as a whole has a history, so do the game's characters, some of whom have been included in every entry. In each game every character has his or her own story, which changes from game to game. Ryu's story in Street Fighter II is not the same as in Street Fighter III; it is not even consistent between the various entries in each series. A character's story in a game is presented at the end of the single-player mode, after the player has defeated his or her final opponent. As such a given game will contain many contradictory stories, resulting in the continual question of what is or is not canon. However, these ongoing narratives are far less significant than the formal history of the characters.

In a long-running, multi-branched series like Street Fighter there is a constant tension between providing new content and maintaining the brand. For 2D fighters in particular there is also the question of character balance: in an ideal world all characters are equally powerful and viable, yet provide unique play experiences. This is of course impossible, and the games are constantly being adjusted to improve game balance. Characters are added and removed with each release; those that stick around never play exactly the same way twice. Moves and combos are added, removed, and altered. Each character thus has two stories: the traditional story shown when the game is beaten, and the history of their mechanics. The fun of finding and learning long-forgotten Street Fighter games is tracing this history of form, which tells the story of the characters' development in a much more direct and immediate way than a traditional narrative. By looking at these games in sequence one can literally watch a character grow and evolve, learning new techniques, altering the old, removing the ineffective.

Sometimes this mode of storytelling is more intentional than others. The characters Ken and Ryu are perfect examples. In Street Fighter I these two are the only selectable characters; in terms of mechanics they are identical. In Street Fighter II there were eight selectable characters, but Ken and Ryu were still identical: they had the same attacks and special moves, and were distinguishable only by minor differences in appearance. As the Street Fighter II series progressed, Ken and Ryu slowly drifted apart. Ken became weaker and faster, while Ryu became slower and stronger. While these changes were originally intended to create greater variability in the gameplay, they began to be incorporated into the backstory as well. Ken became the hot-headed American, Ryu the stoic Japanese warrior.

While this evolution is interesting, it creates an inherent contradiction. As discussed above, Ken and Ryu were mechanically identical in the first two Street Fighter games. Later on the Street Fighter Alpha series was released, and Ken and Ryu's differences are fully realized. Yet, according to the diegetic narrative, the Alpha series occurs between Street Fighter I and Street Fighter II. Furthermore, games in the spinoff Marvel vs Capcom and Capcom vs SNK series were released alongside the main Street Fighter games, but are not part of the official chronology. So while characters were evolving throughout those games as well, their stories in them do not count in the larger narrative. As a result, the characters exist in two separate timelines: the formal timeline, which tracks the evolution of fighting game design, and the narrative timeline, which is the character's diegetic history. Consequently, players unfamiliar with the formal history miss the enormous amount of meaning being transmitted through the game's mechanics. There is much more meaning and information here than in the diegetic history because most of the latter is deemed non-canon.

This dualistic history then gives rise to the possibility of different "interpretive strategies," to borrow a phrase from Stanley Fish (168). Fish was interested in how readers make sense of texts, so in an application to video games it is worth noting that players make sense of both the fiction and mechanics of the game. In the case of Street Fighter, a player "interprets" both who the character is and how he or she functions in the game. For example, consider an experienced player sitting down to a new Street Fighter game. This player's interpretive strategy will likely be to apply franchise knowledge to this new game. The player may recognize the character Ken and interpret him as the "same" Ken from other games. When playing as Ken he or she will naturally look for special moves and combos that exist in other games and have carried over into the new game. The experienced player thus sees the characters as dynamic and evolving, an impression that becomes stronger as more games in the series are played.

A player new to the series, however, is more likely to see the characters as static, or will at least be unaware of any change. In the games themselves references to formal changes are very rare, almost nonexistent, hence new players can only interpret the character within the context of the one game. This is a conscious design choice: if Capcom required players to be familiar with prior games many potential new players would be alienated. As such in any given game the characters must seem complete enough to provide a satisfying experience and not confuse the player.

In Fish's terms one could say these two types of players belong to different "interpretive communities":

Interpretive communities are made up of those who share interpretive strategies not for reading (in the conventional sense) but for writing texts, for constituting their properties and assigning their intentions. In other words, these strategies exist prior to the act of reading and therefore determine the shape of what is read rather than, as is usually assumed, the other way around. (Fish 171)

The two interpretive communities to which fans of Street Fighter belong can generally be described as those who base their understanding of a game on other Street Fighter games, and those who do not; or to put it a different way, those who see the characters as dynamic and those who see them as static.

As with readers of a text, players of a game will likely assign intentions to the author (the developer), in this case Capcom, and here we can see the difference between the two communities. The characters-are-dynamic community will assign intentionality based on formal changes from game to game. For example, if a combo is made harder to execute from one game to the next, this community assumes Capcom thought it was too powerful before, while the removal of a character indicates Capcom thought that character was unpopular. As Fish says, such strategies exist prior to reading, or playing, because the player is already aware that some aspects of the game will be different (even if that assumption is based solely on the title it will almost certainly be correct). On the other hand, those who see the characters as static will likely assign intentionality differently because for them there is no prior context. As such each community "writes" their own version of a new Street Fighter game.

However, unlike the processes of interpreting literature that Fish was writing about, within the overall Street Fighter fan community there is a fairly consistent flow from one community to the other. Currently there are many people playing Street Fighter IV who are not familiar with any other game in the franchise, but as soon as they play a second Street Fighter game they will look for familiar characters and try similar strategies, thus beginning movement to the other community. In this instance Fish's model breaks down because the characters-as-constant interpretation can be definitively disproven, whereas Fish was interested in how people can effectively maintain and defend drastically different interpretations of the same text. Even if there is disagreement within the Street Fighter community over the reasons for the change, the fact that the characters do change is fairly apparent. One could argue that Ken in Street Fighter II is not the same character as Ken in Street Fighter III, and hence there are two separate, constant characters named Ken, but this debate seems unlikely to arise amongst the fan community. Regardless it is clear that Capcom wants us to regard them as the same.

Conclusions

While I find these ideas fascinating, the question remains: am I a fan? Can one distinguish between a fan and someone who is merely interested? I may have just demonstrated a relatively large body of esoteric knowledge, but it is entirely possible to come to the same conclusions while despising these games. I think that, at the very least, I can say that the effort expended here qualifies me as a fan of Street Fighter, even if not in the traditional sense. (This is sort of a Cartesian approach: I write obsessively, therefore I am.) This idea shows how fandom is a spectrum where the rewards gained are proportional to the investments made. By investing in the series as a whole one gains access to the multiple layers of meaning present in each game and acquires new interpretive strategies. However, different people will invest differently and should not be criticized for making different choices.

In the Street Fighter community new players are essential. They bring new challenges, new opportunities, and give Capcom more reason to keep Street Fighter alive. Right now there is a great fear that new and returning fans will eventually get bored and stop playing, just as they did after Street Fighter II. If they do, it will prove to Capcom that there is no market for 2D fighting games anymore, and then there might never be another Street Fighter game. To prevent that, the best thing is to be patient with newcomers and make them feel welcome, regardless of where they fall on the spectrum. Hopefully with time their investment in the series will grow and they will decide to stick around.

References

Fish, Stanley. Is There A Text In This Class? Cambridge: Harvard University Press, 1980.

Jason Begy graduated from Canisius College in Buffalo where he earned a BA in English (2004) and spent much of his time working for Canisius' Department of Information Technology Services. Begy's undergraduate thesis argued that the rules and mechanics of chess and go were a reflection of the religious traditions of Catholicism and Buddhism, respectively. In 2008, Begy completed an MS in Technical Communication at Northeastern University in Boston, where his coursework focused on information design for the Web and information architecture for internal corporate and university networks. When it comes to game studies, Begy would describe himself as a ludologist and as such believes that the best way to study games is through their rules and mechanics. Begy is part of the research team supporting the Singapore-MIT GAMBIT games lab.

The Radical Idea that Children are People

This post is another in a series of essays written by the graduate students in my Media Theory and Methods proseminar last term. They were asked to try their hands at integrating autobiographical perspectives into theorizing contemporary media practices. As noted previously, the result was a strong emphasis on the informal learning which takes place around participatory culture.

The Radical Idea that Children are People

by Flourish Klink

The original iMac is instantly recognizable. Its cute curvy body and its Bondi blue back are iconic; one might go so far as to say that it is the most iconic personal computer that has ever been released. For me, the Bondi blue iMac represents more than just a turning point in the fortunes of Apple Inc., or even a turning point in Americans' computing habits. It represents a key, unlocking the door of the adult world.

In 1999, I was twelve years old. All my friends were having their Bar and Bat Mitzvahs, and I was feeling more than a little left out. Since my family wasn't Jewish, and my mother wasn't quite enough of a hippie to hold a "moon party" to celebrate my menses, my parents decided to buy me an iMac for my twelfth birthday. Even though it was advertised as "affordable," I knew at age twelve that this was an exorbitantly costly present: a thousand three hundred dollars! It was too large a number for me to put it into any kind of context. (Now that I am older I can say: a thousand three hundred dollars is three months' rent on a crummy graduate student apartment, and it was probably more than that in 1999. Scheiße!)

The iMac itself, however, wasn't the important thing. I'd been around computers forever, and I knew what they could do: they could help me draw things, write things, calculate things, program things, blah, blah, blah. All that was exciting, but it got old fast. What was important was the cords that attached to the iMac. You see, I was about to become the only one of my friends to have an internet connection of her very own. No more arguing over whether I should hog the family computer long after my homework was finished. No more begging my father to hurry up so I could get online. Just me and the information superhighway, me and the vast world of online communities, me and all the knowledge I could possibly cram into my malleable young brain.

According to the Pew Internet and American Life project, a third of all teens share their media creations online with others. At twelve, I was ready to be part of that demographic. In fact, I was thrilled. Most of my friends didn't share my single-minded passion for fiction writing and textual exegesis. Actually, "textual exegesis" makes it sound like I was interested in Hemingway or Joyce or something equally high-minded. The fact is, my friends just weren't interested in chronicling the rules of spell casting in the Harry Potter world (you might say "Crucio!" to cast the Cruciatus Curse, but you never Crucio someone; rather, you Cruciate them). I didn't know it, because I didn't know anyone who was involved in the world of media fandom yet, but I was a budding fangirl.

As soon as that iMac came into my life, I began connecting with people online, exploring Harry Potter fan sites, joining mailing lists, posting fanfiction, making friends. The stories I wrote weren't very good - I was twelve years old, and I wanted to explore emotions that I had only the most inchoate and vague experience with. But my writing skills were good enough that I attracted the attention of not just other preteens but also adults, good enough that I was able to take my place in the online community as a valuable participant. In Situated Language and Learning: A Critique of Traditional Schooling, Jim Gee calls spaces like the Harry Potter fan community "affinity spaces," and cites their value as locations for learning.

My experiences support his claim. I could hardly tell you about anything I did in high school; a few fantastic teachers are easy to recall, but even the details of what I learned in their classes are fuzzy and dim. Yet I can remember the experience of getting feedback on my fanfiction as if it were yesterday; I can remember how much I struggled to write my first fanfiction novel, and I can remember reading Strunk and White's The Elements of Style because I translated it into Harry Potter terms ("Headmaster Dumbledore is a man of principle, and his principal goal is to keep Lord Voldemort from rising again," et cetera). I was driven to write, to read, to found a non-profit company, for heaven's sakes, all before I reached the age of sixteen. In comparison, my time in high school seems empty, void, a place-holder that let me get that precious diploma and hightail it to college as fast as possible.

I believe that my internet connection, as symbolized and enabled by that beautiful Bondi blue iMac, inspired me to pursue my goals - but I also believe that it helped me fill an enormous lack in my life. Trapped as I was in the suburbs, too young to drive and be mobile, I could not find a community where my own particular expertise was respected and valued. I felt trapped in my twelve-year-old body, frustrated that everyone around me saw me as a kid. (Actually, I wonder if I wouldn't have felt just as trapped even if I had lived in an urban area, even if I had been able to seek out other people like me in the physical world: "on the internet, nobody knows you're a dog," but in person, everybody knows that you're only twelve.) My internet connection gave me the opportunity to try on a new role: the role of a fan author and editor. That role wasn't one that was tied to my "kid" status. Anyone could be a fan author, anyone could be a fan editor, and if I could do those things as well as anyone, I could earn the right to be just as important and respected as an adult.

Now, looking back through the mists of time, I spend a lot of time thinking about how I could help other kids have similar experiences to mine. If I could find some way to introduce teens to affinity spaces that would provide them room to learn and grow the way that Harry Potter fandom did for me, I'd do it in an instant. Unfortunately, you can't force anyone to discover an affinity space. As young, idealistic English teachers learn every day, just because you love a book doesn't mean you can make everyone else love it (sorry, Ms Christiansen; I still think that Harry Potter was more formative for me than The Catcher in the Rye). If I had discovered the online fan community through a class, I might still have liked it - but then, I might have rejected it, slotting it firmly into the category of "work" rather than "play."

Then, too, there's the problem of the digital divide. I felt awfully overlooked, sometimes even dehumanized and objectified, as a pretty little twelve-year-old, but I wasn't nearly as overlooked as a kid whose parents couldn't afford to buy her a shiny new iMac - and I wasn't anywhere near as overlooked as a kid who'd never gotten to interact with a computer at all, or a kid whose literacy skills were so poor that they couldn't participate effectively in online discussion. For privileged young me, the internet was a saving grace, but I was starting with so many advantages that it seems short-sighted to take me as a case study.

So what can I learn from my childhood experiences? What can I give youth that's as valuable to them as that Bondi blue idol was to me? I think that the first answer has to be "don't give them anything." That power relationship has got to go. That's what the computer really did for me: it gave me access to a space where no adult could tell me what to do. In the Harry Potter books, Harry was taking on adult roles, taking on challenges that would be difficult for grown-ups even though he was only a kid; online, I was doing the same thing. Since then, though, I've - well - I've aged. I've become less and less likely to think of preteens as individuals with hopes, dreams, expertise, and knowledge, and more and more likely to think of them as kids. When I was 12, I never believed this day would come, but at 22, it's easy to forget how I felt ten years ago.

I can't give every preteen I meet a shiny new iMac, and I can't teach them how to use it, and I can't instill confidence in them, and I can't lead them by the hand into affinity spaces and make them like it. I can try to make it so that they don't need the same measure of escape that I did. I can try to make sure that I don't just slot them into the category of "child" and forget about them, and I can try to make sure that they know I respect, trust, and believe in them. I can do that much.

Flourish Klink co-founded one of the largest Harry Potter fan fiction sites, FictionAlley.org, a project which was nominated for a Webby in 2004 and a Prix Ars Electronica award in 2005. She was one of the young fan fiction writers interviewed for Convergence Culture, already identified as a key writer and editor while still in high school. Her undergraduate career focused on the classics and religion, interests that she learned to combine with her growing fascination with digital media and fan culture. She earned a BA in religion from Reed College in 2008, where her undergraduate thesis explored the question: Can one have a Catholic religious experience in virtual reality? The project ultimately centered on religious communities within Second Life. At MIT, Klink has become a valuable member of the Project NML team. Her personal website is at madelineklink.com.

Bouncing Off the Walls: Playing with Teen Identity

Off and on, over the next few weeks, I am going to be showcasing work produced last term for my Media Theory and Methods graduate prosem at MIT. In the class, we spend a good deal of time exploring how various theorists and critics situate themselves in relation to the cultural objects and processes they study. This issue surfaces especially in relation to ethnographic research but also matters when dealing with a range of critical practices, especially those which emerge from feminist or minority perspectives. I ask students to write one paper which forces them to tap into their own autobiographical experiences as they seek to theorize some larger aspect of contemporary culture. The results never cease to amaze me: this is the most personally engaged writing these students generate all year and each brings something fresh to my own understanding of popular media. This year, there was a strong emphasis on educational issues -- a byproduct of the work we have been doing through the New Media Literacies Project and the Education Arcade. Many of the students returned to moments in their life when they were learning how to become cultural participants, media makers, curators, or critics of popular media.

Bouncing Off the Walls: Playing with Teen Identity

by Hillary Kolos

If you've ever had the chance to observe a teenager use the web, it's likely one of their browser windows was open to their Myspace or Facebook profile. Teens are constantly updating and customizing their profiles online, adding photos and songs, and posting to each other's virtual "walls." While this could be interpreted as just playing around, these activities can also be a means for teens to construct and experiment with their identity. In particular, a profile can be a space for exploring one's gender identification and sexuality.

Gerry Bloustien proposed this view in her work on teen girls' use of video to create personal representations. She notes:

"On the surface such attempts at representation...seemed like 'just play' but under closer scrutiny we can see specific strategies--'the human seriousness of play'--providing insights into the way gendered subjectivity is performed." (Bloustien, 165)

Serious play for teens is not necessarily something new to the digital age. Adolescence is often considered a time when rules are relaxed and young people can experiment with who they are or want to be. As new technology emerges, though, some choose to blame it for distracting youth from what they see as the more important things in life, like education, physical fitness, or family relations. But teens' playful activities, while fun, can often have the deeper purpose of identity construction, which may not be apparent to those who always view play as meaningless.

As a teen, my arena for play was primarily my bedroom. I remember once ripping out a black and white Calvin Klein ad from the latest issue of Vogue. In it, a young woman - not an All-American beauty, but striking in appearance - sat on the ground with her legs tucked under her. Her head was shaved and her face pierced. She wore just a black bra, a jean skirt and black tights. Most would not have read too much into this picture at all, but to me it represented a way of being, both in its content and form, that I wanted to emulate - down to earth, edgy, and beautiful.

At the time, I was a 14-year-old wannabe skater chick, living with my mom, dad, and brother in suburban Northern Virginia. Earlier, when I was 10, I had asked my parents to please take down the 1970s nursery-themed wallpaper on my walls and paint them pink. While my parents are very loving people, they aren't the quickest at finishing projects. So four years later (just enough time for me to outgrow my wall color preferences) I finally had a fully-painted pink room - and I totally hated it.

Tastes change, especially when you're a teen trying on new identities, but there was no way I could ask my parents to change the color of my walls again. Instead, I began a playful experiment: I decided to hang the Calvin Klein ad on my wall. From a young age I loved fashion. As a teen, I had several subscriptions to fashion magazines, including Vogue, Elle, Bazaar, Allure, and W. What if I used their pages to cover up the pink that was just so not me anymore? I'd start in the upper left corner and work my way around the room. Sure, there'd be some pink poking through, but eventually I'd be free from that oppressive color. I was over my pre-teen days of loving unicorns and Top 40. I wanted to make myself into a new kind of girl - pretty and cool, but different.

I began my experiment right before the Internet boom in the mid-'90s. Email and AOL chat rooms were all the rage, but there was nothing like the social networking sites and new media tools that teenagers have today to express themselves. Using websites as their "walls," teenagers today construct identities using collages of photos, music, and text online. Sites like Flickr, blip.fm, and YouTube make it simple to gather media of all kinds under profiles which stand for who you want to be on the web. My teen years were similarly saturated with media. In my case it was cable TV, pop radio, and glossy magazines, but my options for organizing and presenting the bits of media I wanted to represent who I was were limited. I made my outlet my bedroom walls.

Selection

Teenagers' bedrooms are usually the only physical space they have all to themselves. I wanted anyone who walked into my room to know immediately the style I liked, the bands I thought were cool, and the boys I thought were hot. Not the deepest stuff, I know, but it was important to me then. My room was my identity lab. On my walls, I could play with how I wanted to be perceived by others, combining images to create something bigger than any single picture could depict.

My curatorial process didn't have any strict guidelines. In general, the pictures were of women. (Though a cute, male model made it in every once in a while.) I was picky about what I added and it took me about a year to fill up just one wall. While the clothes in each image were an important aspect, there was often something else about the photo that made it special enough to hang - an interesting use of color, a unique composition, or a model whose appearance broke with convention.

My selection pool was limited to the mainstream magazines my mother would buy for me. Though I wanted to portray myself as on the edge of the mainstream, I had very little access to alternative media. I lived about an hour from Washington, DC where a sizable independent movement was occurring in the local music scene. While I heard about this from friends at high school, my parents' strict curfew and exaggerated view of crime in the city prevented me from being a part of it. Instead, I spent time in my room creating my vision of the world I wanted to occupy. Using images from mass media, I created a collection of the most creative and attractive images I found and presented them on my walls.

Audience

But who exactly was I presenting this collection to? Who was my audience? I grew up in a neighborhood with few girls my age. My brother spent his time running amok with a band of boys who lived down the street, while I busied myself inside with crafts, reading, and TV. Later in my adolescence, I attended a magnet high school that was 35 miles from my house, a distance great enough to prevent most friends from visiting me. The only people who saw my room, then, were my parents and the one local friend I had, Wendy. (She too had her walls covered in magazine pictures, but had more of a metal theme going.)

My mother was the person who saw my room the most. While I didn't realize it at the time, she was most likely my primary audience. She hated it when I started to hang pictures on the wall. She prided herself on having a neat house and thought the pictures made my room look cluttered and trashy. From around the age of 12 on, I, like most teen girls, had an antagonistic relationship with my mother. Nothing too drastic, just a constant misalignment of taste. At the time, I felt like the biggest problem was that she didn't understand me. I begged her to watch the TV show My So-Called Life with me because I strongly identified with the teenage main character, Angela, and her experiences. I thought maybe by watching the show my mother would understand me.

My mom never watched the show with me, and we rarely talked about things like fashion, music, or boys. She did however come into my room constantly to talk to me about other things or to clean. Since my mom and I didn't talk much about my interests, I had to force them on her visually. What better way to show your mother what you're into than to Scotch tape it to the walls of her house?

I was also my own audience. I stopped attending Catholic school around eighth grade, which is about the same time I started watching MTV. My worldview basically exploded wide open at that point. For the first time in my life, I saw that I could construct an identity with the clothes I wore and the music I listened to. Also, my identity didn't have to be static; I could play with the possibilities. I was initially intrigued by grunge music... then indie rock... then techno... then punk and ska... then hardcore. For me, high school was a playground for trying on different alternative identities.

The fashion ads I put on my wall became an amalgam of styles, but, in reality, I could never afford the clothes in the ads. Instead, I began to shop in thrift stores and create my own mix of styles influenced by the ads. I had limited resources, both in terms of money and selection at the thrift stores, which forced me to be more creative with my outfits. My high school peers were very tolerant of different looks and I took the opportunity to experiment with my style.

Performance

Many are concerned with the images that fashion ads portray and their impact on young women, especially in terms of body image. The mid-nineties may have been the height of this fear, as "heroin chic" ruled and a super-thin Kate Moss was on every other page of fashion magazines. I was lucky to be naturally tall and thin and thankfully escaped the desire to radically transform my body to match the fashion world's runway standards.

Instead, what I tried to emulate was the femininity in the photos. As edgy as some of the ads I hung on my wall were, they always possessed a sense of femininity and sexuality. Whether it was showing some skin or wearing a flowing pantsuit, the women in the ads rarely represented traditionally masculine qualities. As I ventured into my teen years, I became less interested in being one of the boys and more interested in what it meant to be a woman.

I wanted a safe way to explore femininity so I tested the waters by dressing up and taking pictures. I did this solely in my room with my friend, Wendy, and it quickly became one of our favorite activities, better than our other options of wandering around Wal-mart or hanging out at Denny's. We'd pull together some of the more extraordinary thrift store items, put on a ridiculous amount of make-up, and do our hair in ways we'd never be seen in public. As I looked back at pictures we took, I saw a variety of styles that we explored. Sometimes we went for goth with dark lips and black clothes. Other times we obviously had the Spice Girls in mind with uber-glam makeup and fancy dresses. No matter what the genre though, the clothes we chose were always tighter and more sparkly than the torn jeans and baggy t-shirts we wore to school.

These photo shoots were our way to perform and practice what it meant to us to be a woman. The images on my walls and those that we had seen on MTV served as a starting point. We then translated elements from them into our photos of ourselves, working with what we had available to us in my room. One picture we took stood out to me. In it, I have made myself up to look like one of the pictures on my wall - one where the model is dressed in a kimono-like dress with her lips painted like a geisha. In our picture, I sit in a wicker chair with the ad hanging just over my shoulder, which, as I remember, was unintentional. My lips are painted similarly and I am sitting like the model (my dress and hair are way off). While I knew next to nothing about what a geisha was historically, I had a strong desire to perform the look of the ad. I wanted to see myself with those same lips, in the same position. I wanted to see if I could look like that kind of woman.

As a teen, I used many resources to play with new identities. Fashion ads served as inspiration. My walls were a place to exhibit them. I did also, on occasion, leave my room, where I had other experiences that helped shape the woman I am today. But having a space of my own to play and then reflect was very important to my process of identity formation. What seemed like goofing off at the time was actually a process of exploring who I thought I was, as well as who I thought I should be.

My experience in my room is one of countless examples of how teens use their available resources to explore potential identities through play. This kind of play can happen in private, but often young people use media to capture their experiments and share them with others. In this way, they can gauge reactions and refine their performances. I used my walls to reach a limited audience, but today teens can easily reach millions of people online and receive feedback instantly on how they represent themselves. It will be interesting to see the new possibilities, as well as the new concerns, that emerge as teens use new resources to play with their identities online.

Bloustien, Gerry. "'Ceci N'est Pas Une Jeune Fille': Videocams, Representation, and Othering in the Worlds of Teenage Girls." Hop on pop: the politics and pleasures of popular culture. Ed. Henry Jenkins, Tara McPherson, and Jane Shattuc. Durham: Duke University Press, 2002. 162-185.

Hillary Kolos completed a BFA at Tisch School of the Arts, NYU and worked in after-school programs, including one at the School of the Future, where she co-taught a high school filmmaking class. After graduating from college in 2002, she worked at a not-for-profit production company that produces documentaries on current issues in education for PBS. Seeking more experience in the classroom, she then worked as a media educator in New York City schools. She currently works as a media mentor for Adobe, advising teachers on how to incorporate media into their curricula. She was inspired to return to graduate school after reading the white paper produced by Project NML for the MacArthur Foundation. She has been working with NML this year around the classroom testing and refinement of our Teacher's Strategy Guide, "Reading in a Participatory Culture." She is currently developing a thesis centering on the gaming cultures of MIT, the notion of "geek mastery," and the gender dynamics of technical expertise. In the future, she hopes to work as a consultant to help teachers incorporate new media literacy skills into their classrooms.

Authoring and Exploring Vast Narratives: An Interview with Pat Harrigan and Noah Wardrip-Fruin (Part Three)

Are the "vast narratives" created under commercial conditions different from some of the avant garde experiments or eccentric art projects (Henry Darger) also discussed in the book? In other words, do artists think about such world building differently removed from the marketplace?

Artistic considerations can be opaque at the best of times, and that's especially true with someone like Darger. But it's probably safe to say that commercial considerations played no part in his mind. His work was obviously a very private, very internal process. As far as we know, no one but he even knew it existed until after he died. But it's impossible not to speculate, isn't it?--why someone would spend their life creating something like In the Realms of the Unreal. He's almost like a Borges character.

But getting back to the commercial considerations: Walter Jon Williams addresses this directly in his Third Person chapter, and goes into some detail about the commercial considerations of shared-world novels and novel franchises, and how they inform his artistic choices in different ways than his single-author series.

Monte Cook and Robin Laws also discuss this in regards to the tabletop RPG industry, and here we get into very interesting areas of artistic choice. Because what a tabletop RPG writer is doing is creating a kind of machine that other people can use to create stories. Speculatively, someone could write an entire RPG system from scratch, for their individual use, but they'd still be playing the system with other people. The primary consideration in any RPG design is: Does it work? In other words, does it create the kind of stories I want it to, in the way I want it to? And because the tabletop RPG hobby is an inherently social one, this question is very, very close to: Will other people want to play it?

Laws' essay touches pretty directly on the commercial considerations that go into publishers' decisions to go with one property or another, or create their own. And Cook's essay focuses on the sequence of choices a gamemaster has to make in order to enact a particular rules system for the players. What we still don't have much of, outside some of the other 2P and 3P essays (Hite, Hindmarch, Glancy, Stafford) are really nitty-gritty analyses of why designers have created particular rules systems. Why does Call of Cthulhu have a "Sanity" mechanism? Well, that's an easy one, but why, for instance, does Dogs in the Vineyard have a dice pool system, with which players "bet," "raise" and "call" against the gamemaster? Why does The Mountain Witch have a "Trust" mechanism? For every example like that, some designer or team of designers balanced genre appropriateness, individual preference, commercial potential, player familiarity, ease, elegance, playability, and on and on.

For comics, as much as we love them, there are serious narrative handicaps to anyone working within one of the established commercial universes. In particular, it's rare that anything ever truly ends in any real sense. Storylines wrap up, series get cancelled, characters die--but the universe spins on. It happens in this way because DC and Marvel can still make money from it. It takes a huge apparatus of creators, editors, printers, distributors, retailers, consumers, etc., to keep these universes functioning.

You see something analogous in MMOs, although in that case it's weighted much more heavily on the creative and consumer ends, with fewer middle steps. But in both MMOs and comics, there's an unslakeable thirst for new content. You can't just stop producing, or the whole thing dries up and blows away. The advantages MMOs have over comics in this regard are: 1) They are much, much more profitable, and 2) Consumers create a large part of the new content themselves, in the form of their characters, inter-character interactions, and user-created emergent storylines. Anyway, all of this exists in the marketplace, not the ivory tower; the final judgment is the commercial one.

Of course, the art world is also a marketplace--and even the competition for faculty positions (which support many of the more interdisciplinary and experimentally-oriented digital media artists) exerts what might be seen as a market-like pressure. But the pressures aren't the same as those for commercially-oriented vast narratives.

Comics and science fiction fans have long stressed continuity as a central organizing principle in vast story worlds. Yet, you close your introduction with the suggestion that continuity is only one of a range of factors structuring our experience of such stories. Can you describe some others?

"Continuity" is a byproduct of telling a bunch of stories within the same setting. If someone writes a stand-alone novel, she doesn't have to worry about it, except in the simplest sense of making sure that a character who dies on page 50 isn't alive again on page 200. It's only when an author writes a series of novels, or comics, or something else, or other people start writing in that world, or it otherwise grows longer and more complex, that continuity becomes an issue. On the most basic level, it's a sort of contract between author and reader, showing that you care enough to keep the details straight (and aren't engaged in a metafictional exercise or parallel-worlds plot). Too much sloppiness in this area breaks the trust and announces the story's fictionality too directly.

That said, in certain genres, like big comics universes, maintaining continuity is hilariously difficult, bordering on impossible. Grant Morrison is probably right when he says that continuity is mostly a distraction in big comics universes, and will be as long as characters are not allowed to age and die away. No one is going to kill off Batman permanently, no matter what happened in Final Crisis 6, just as Barry Allen, Hal Jordan, Oliver Queen, Superman and the others all came back from the dead.

This speaks to a wider problem in comics continuity--without any real endings, and with no meaningful change that can't be revised or done away with at any time, the DC and Marvel universes lack consequences. Any individual storyline might be good or bad, but because they all exist within this ceaseless flow of stories, any narrative power is slowly worn away. One of Pat's favorite DC storylines is Paul Levitz and Keith Giffen's 1984 "Legion of Supervillains" storyline, in which Karate Kid is killed. Now we see that Karate Kid is back in Countdown to Infinite Crisis. What does this do to our appreciation of the original story? Nothing has changed about the text, but now it's been robbed of permanent consequence, and Pat's pleasure in it is diminished. Maybe that's a shallow way of appreciating narrative, but few comics readers will deny that it's a significant part of their enjoyment. And not just comics: the same thing happens in all forms of storytelling. We don't know of any literary critic who appreciates the narrative twist with Mr. Boffin near the end of Our Mutual Friend. You feel cheated; it's arbitrary and it undermines everything that's gone before, and robs the story of what James Wood calls "final seriousness."

This is what made The Dark Knight Returns so powerful, when it was first published. By providing an ending to Batman's story, it cast its shadow both forward and backward over Batman's entire publication history. Suddenly it became possible to read a Batman story in light of where the character was ultimately going. Alan Moore tried to do the same sort of thing--provide a possible ending--for the entire DC universe in his unproduced Twilight of the Superheroes miniseries, a missed opportunity if there ever was one.

Even Agatha Christie recognized this, though her series novels are almost completely continuity-free, with Hercule Poirot and Miss Marple staying essentially static throughout her uncountable novels. But she still wrote Curtain (and kept it in a bank vault for over 30 years, until a few months before her death) to provide an end to Poirot.

Maybe the best approach to comics is to view them, as Grant Morrison seems to, as existing in a sort of permanent mythological or legendary space, in which the importance lies in the relationships between the characters and the ritual reenactment of certain actions, and not in the movement of these characters through time. We're okay with Homer, Aeschylus, and Euripides all giving us versions of the story of the House of Atreus, and we appreciate them on their own merits, as literary instantiations of the same story. We don't spend much time trying to reconcile the discontinuities.

Greg Stafford's 3P chapter discusses the process of distilling multiple sources of the Arthurian stories into a coherent, playable RPG campaign. This was a heroic undertaking, but it was possible because 1) Stafford had final authority to accept, reject, or reconcile discontinuous story elements, and 2) he was not working with a constantly-expanding data set, such as the DC Universe. The question is not so much "Could you coherently reconcile all of DC's continuity?" as, "Why would you bother?" Without meaningful consequence, it's better to view the whole universe as existing in a sort of timeless fugue state, with only transitory consequences.

Incidentally, Doctor Who exhibits a different strange mixture of semi-continuity, with irreconcilable story elements (e.g., the multiple histories of the Daleks) combined with actual, permanent consequences (e.g., the Doctor's regenerations). A lot could be said about this, and what it means for narrative reception, and there's certainly a lot of that discussion in Third Person, but we've gone on a bit long here already.

The "ending" is a recurring issue in the book, with several essays promising us "my story never ends" or "world without end," while others point to the challenges of sustaining creative integrity given the unpredictable duration of television narratives. Does the idea of a "vast narrative" automatically raise questions about endings and other textual borders?

Perhaps not automatically, given that we're treating as "vast" projects that are both ambitious in scope and yet planned for a particular, bounded shape from early on. But it's a very common move for vast narrative projects to make, and it's probably an inherent part of those that are conceived as productive systems. Why turn the system off? Similarly, those that are connected closely to events in the world beyond their control, or which have important audience contributions, have something in their dynamics that resists not only the hard border (those are intentionally designed away) but also the ending. That's why we've seen audiences attempt to continue projects that the authors bring to an end. But, of course, that's just a current twist on an old phenomenon, one you've also seen in your work on fan cultures.

That said, and though it may betray a little stuffiness, Pat does prefer narratives that seem to have a traditional shape to them, with meaningful endings that pay off everything that's gone before. And Noah thinks this is essential to a certain kind of project, even if some of his favorite fictions (from Mrs. Dalloway to Psychonauts) succeed on different terms. Commonly, comics and television structures work heavily against traditional narrative closure, but for commercial reasons rather than for interesting modernist, postmodern, or experimental ones. Which is why it's so exciting to come across something like The Wire, which is a coherent literary work realized in the televisual medium, which until recently Pat at least didn't think possible.

What demands do "vast narratives" place on the people who read them? Is a significant portion of the reading public ready to confront those challenges?

At this point, the question might actually be whether the expanding end of the reading public is willing to take on something that isn't as vast as, say, the Harry Potter or Twilight books. Perhaps it's just our skewed viewpoint, but it seems like large fictional projects, which either start with novels or have them as part of a cross-media environment, are a key way the reading public is growing. This reminds Noah of how his experience of being in the university is changing, now that even graduate students often can't remember a time before the Web very clearly and most students think that games are "obviously" as important a media form as, say, television. Vast possibilities and large interaction spaces now seem a kind of media norm.

That said, the pleasures of our youths--e.g., reading Marvel and DC comics and playing Call of Cthulhu and Champions (not the forthcoming online version)--were pleasures that grew with extended engagement, with developing understanding and elaboration of fictional universes and their characters. Those could be thought of as "demands," but we didn't feel that way about them, and we don't have the sense that people today reading a long series of novels or playing a computer RPG for 50+ hours (without even being completionist) feel that way either.

T-t-t-that's all, folks!

Authoring and Exploring Vast Narratives: An Interview with Pat Harrigan and Noah Wardrip-Fruin (Part Two)

A reader asked me whether the book included a discussion of soap opera, which would seem to meet many criteria of vast narrative, but doesn't fall as squarely in the geek tradition as science fiction series like Doctor Who or superhero comics like Watchmen. Pat does include a brief note about his own experience watching soaps with his grandmother. What do you see as the relationship between "vast narratives" and the serial tradition more generally?

Soap opera is definitely a missed opportunity for us. We had intended to have at least one essay on the subject, but it fell by the wayside as our contributors came aboard and our word count ballooned. We had also intended to have more essays on more purely literary topics; as it stands, Bill McDonald's essay on Thomas Mann seems a little lonely in the middle of all that television. We had wanted at least an essay on Faulkner, probably one on Dickens, and some others. But it's exactly there that Third Person would have started to tip over into more traditional areas of literary history, theory, and narratology. We think one of the strengths of the series is the unexpected juxtaposition of very different fields and genres. So in the end, we opted more for the digital.

The serial tradition seems to us to be a huge and maybe indispensable part of most "vast narratives." Comic books and television especially follow very naturally from the serial tradition exemplified by Dickens. In all cases, the story unfolds in the public eye, as it were: David Copperfield appeared in monthly installments, as do most modern comic books; TV serials are generally weekly. In all cases there's ample opportunity for the public to respond to plot developments and offer feedback.

In David Copperfield, for instance, you have the strange character Miss Mowcher, who appears first as a rather sinister and repulsive figure, but when she reappears is pixie-ish, friendly, and plays a role in helping David. What had happened in the meantime is that the real-world analogue of Miss Mowcher (Catherine Dickens's foot doctor) had recognized herself in the installment and threatened to sue. And as we understand it, the characters of Ben on Lost and Helo on the new Battlestar Galactica were both intended to be short-term minor characters, but proved so popular with viewers that they were promoted to central recurring positions.

There are plenty of artistic problems that arise from serialized storytelling, one of the most serious of which is the potential for unbalancing the narrative. Writing an unserialized novel allows you to edit, revise and generally overhaul the story before the public sees it. To serialize a story forces you to go with your thoughts of the moment, which may change before you finish the story, whether because of new artistic ideas of your own or because of outside forces (TV cast changes, editorial shifts in direction, Miss Mowchers, etc.). The Wire is one of the strongest televised serials ever aired--arguably it's simply the best--and that show was blessed with a strong writing staff with long-term narrative plans, substantial freedom from editorial direction, and as far as we're aware, very few unplanned cast changes. David Simon and the other creators like to talk about Dickens in reference to the show, but The Wire is in fact much more narratively balanced and formal in structure than most of Dickens's novels.

At the same time, a lot of exciting art happens in exactly the improvisational space that seriality provides. The writing staff on David Milch's Deadwood seems to have, on a daily basis and under Milch's direction, group-improvised nearly all of the Deadwood scripts. The end result is a constantly surprising story that still somehow appears as a tightly-structured drama, even down to following, more often than not, the Aristotelian unities of time and place. (And we'd be remiss if we didn't mention that Sean O'Sullivan does great work discussing seriality both in his Third Person essay, and in his essay in David Lavery's collection Reading Deadwood.)

First Person experimented with placing a significant number of its essays online and encouraging greater dialogue between the contributing authors. What did you learn from that experiment?

One thing we learned is that putting a book's contents online, which previously had mostly been done with monographs, could also work with edited collections. MIT Press was happy enough with the results that we followed this practice with Second Person and will do it again with Third Person. We'd like to see this practice expand in the world of academic publishing, since we now have some evidence that it doesn't make the economic model collapse (it's other things that are doing that, unfortunately, to some areas of academic publishing).

Another thing we learned is that, while blogs were already rising in prominence by the time we started working with Electronic Book Review on this portion of the project, the kind of conversation encouraged by something like EBR isn't obviated by the blogosphere. In general, blog conversation is pretty short-term. People tend to comment on the most recent post, or one that's still on the front page, and this is only in part because blog authors often turn off commenting for older posts, as an anti-spam measure. EBR, on the other hand, solicits and actively edits its "riposte" contributions (returning them to authors for expansion and revision, for example) and ends up fostering a kind of conversation that still moves more quickly than the letters section of a print journal, but with some greater deliberation and extension in time than generally happens on blogs. These different forms of online academic conversation end up complementing each other nicely.

As you note, comics have had a long history of managing complex narrative worlds. What lessons might comics have to offer the new digital entertainment media?

Digital media has already absorbed a lot of helpful lessons. In Third Person this can be seen in Matt Miller's chapter on City of Heroes and City of Villains, which goes into depth on how Cryptic translated comics tropes into workable MMO content.

The place to speculate might actually be the reverse of the question: what comics could take from contemporary digital media. We don't have any idea what a Comics Industry 2.0 would look like, but we suppose it's possible that DC and Marvel could take some of the pressure off themselves by integrating user-generated content of some sort; overseeing, funding and formalizing fan web sites, or who knows.

Every so often the industry does try something like this: back when we were growing up, there was a comic series called Dial "H" for Hero, in which a couple of kids had some sort of magic amulets that would turn them into different random superheroes when activated. The twist was that all of the names, costumes and powers of the heroes were reader-generated. Readers would send in letters with drawings and descriptions of superheroes they'd invented, and then those heroes would be integrated, with the appropriate credit, into later issues. This sounds extremely childish, and it was. There were no opportunities for readers to affect anything except the most replaceable elements of the story. (Although we do give DC credit for making it a boy-girl team, so that one of each pair of superheroes created would be female. Trying to build female readership is an ongoing problem for the big companies.) Later in the '80s, DC did give readers the opportunity to alter the narrative, when they ran the "A Death in the Family" storyline in Batman. In this case, the Joker attacks, beats and blows up Jason Todd, the unlikeable second Robin, and DC established a 1-900 number which readers could call to vote on whether Todd lived or died. Well, they voted for him to die, and so he did, but the whole thing is regarded, rightly, as pretty distasteful, and they never bothered with anything like it again.

So the impulse toward interactivity exists in the industry, though it's never really gone anywhere. We suspect that some type of formalized interactivity will be a part of the comics industry going forward. What it will look like, we don't know.

More to Come

Authoring and Exploring Vast Narratives: An Interview with Pat Harrigan and Noah Wardrip-Fruin (Part One)

One of the first classes I will teach through my new position at USC will be Transmedia Storytelling and Entertainment. I've already started lining up an amazing slate of guest speakers and have put together a tentative syllabus for the class. The primary textbook will be Third Person: Authoring and Exploring Vast Narratives, which was edited by Pat Harrigan and Noah Wardrip-Fruin. Many of you who have been working with games studies classes may already know the first two volumes in the MIT Press series which Harrigan and Wardrip-Fruin have edited. I've been lucky enough to be included in two of the three books in the series: my essay "Game Design as Narrative Architecture" was included in First Person and my student, Sam Ford, interviewed me about continuity and multiplicity in contemporary superhero comics for Third Person. So, I am certainly biased, but I have found this series to be consistently outstanding.

A real strength is its inclusiveness. By that I mean both that the editors reach out far and wide to bring together an eclectic mix of contributors, including journalists, academics, and creative artists working across a range of media, and that they have a much broader span of topics and perspectives represented than in any other games studies collection I know. They clearly understand contemporary games as contributing something important to a much broader set of changes in the ways our culture creates entertainment and tells stories.

For my money, Third Person is the richest of the three books to date and a very valuable contribution to the growing body of critical perspectives we have on what I call "Transmedia Entertainment", Christy Dena calls "Cross-Platform Entertainment", Frank Rose calls "Deep Media," and they call "vast narratives." Each of us is referring to a different part of the elephant but we are all pointing to an inter-related set of trends which are profoundly impacting how stories get told and circulated in the contemporary media landscape. I found myself reading through this collection in huge gulps, scarcely coming up for air, excited to be able to incorporate some of these materials into my class, and certain they will be informing my own future writing in this space.

And I immediately reached out to Pat and Noah about being interviewed for this blog. In the exchange that follows, the two editors speak in a single voice, much as they do in the introduction to the books, but they also signal some of their own differing backgrounds and interests around this topic. The interview is intended to place the new book in the context of the series as a whole, as well as to foreground some of the key discoveries that emerge through their creative and imaginative juxtapositions of different examples of "vast narratives."

Can you explain the relationship between the three books in the series? How has your conception of digital storytelling shifted over the series?

First Person was originally conceived as an attempt to reflect and influence the direction of the field, at a particular moment, while also trying to do some work toward broadening interdisciplinary conversation (in the vein of Noah and Nick Montfort's historically-focused New Media Reader). As such, most of the essays grew out of papers and panel discussions from conferences, especially Digital Arts and Culture and SIGGRAPH. This is also why we used the multi-threaded structure--in order to preserve some of the back-and-forth of ideas characteristic of any emerging field. Unfortunately the book didn't come out as quickly as we hoped, and we were a little worried that it would become more of a history. But it turned out that many of the issues the field was concerned with at the time (e.g., the ludology/narratology stuff) remained, and still remain, things that people entering the field have to think through--so readers still find the book useful today.

That said, we learned an important lesson about the potential for delay, and about thinking of the long-term relevance of a project, so for Second Person we very consciously tried to commission a book that we didn't conceive of as trying to influence the conversation of a particular moment. Pat was working at Fantasy Flight Games when 1P was released, and had been thinking a lot about the relationship of stories to games, especially board games and tabletop RPGs. We both thought it would be an interesting area to explore, especially considering that there wasn't much out there, to our knowledge, that covered similar ground. So the idea was to explicitly draw connections between hobby games, digital media, and other similar performance structures (like improvisational theater) and meaning-making systems (like artificial intelligence research). It was much less "of the moment" than 1P and to our minds, that's when the series really started to take its shape.

Third Person wound up being something of a hybrid of the first two books. Like 2P, it addresses some underserved areas of game design and experience--such as Matt Kirschenbaum's essay on tabletop wargames--but again we're trying a bit to change the terms of the discussion, arguing for a broader conception of our topics. While 2P may have been one of the first books to integrate real discussion of tabletop and live performance games with computer games, its concept is one that goes down easily with most people in the field (we even got reviewed in Game Developer magazine). 3P is a bit of a challenge to digitally-oriented people who think about their field as "new"--or exclusively concerned with issues related to computational systems--because we believe people making digital work have something to learn from people doing television, comic books, novels and the other forms discussed in the book. And we also believe there's something to be learned in the opposite direction as well, and from continuing to connect projects from "high art" and commercial sources. We're very curious to see what the reception turns out to be for this volume, which we view as completing a kind of trilogy.

One striking feature of this series has been the intermingling of perspectives from creative artists and scholars. What do you think each brings to our understanding of these topics? Why do you think it is important to create a dialogue between theory and practice?

Broadly speaking, our scholarly essays often provide a big-picture view of a subject, providing context and analysis, and our artists' essays provide a more detail-oriented, granular view, usually of just a single work or small number of works. Inevitably these distinctions become pretty blurry; for example, we intended John Tynes's 2P essay to be strictly about the Delta Green design process, but he wound up providing a wide-ranging, highly analytical piece about game design philosophy--which is wonderful! Later, in 3P, we gave Delta Green co-creator Adam Scott Glancy the same mandate, and got something of the same result, with a history of the Delta Green property mixed in with wider ideas of narrative strategy.

This is one of the benefits of getting all these contributors side by side in the same series of books; you can see ideas from one person reflected in very different contexts, or, in the case of Delta Green, how the somewhat different design philosophies of two of the three Delta Green creators combined to create the property. This is then situated in the larger context created by the contributions of other creators and scholars, working in a variety of forms related to our themes, resulting in something far richer than one author could deliver.

Incidentally, one notable thing we've found about hobby games designers is that they're very willing to talk about what goes into their design process, but they're seldom asked! That's a result of the anemic academic attention paid to the field. For literary critics, a novelist's or poet's design process, philosophy, and narrative strategies are all legitimate areas of study (even if "author studies" is now rather out of fashion). Even video game designers are getting some respect these days. But the hobby games industry is too small, it seems, to have merited much attention. This despite the fact that many current video game designers started in the hobby games field: Tynes, Greg Costikyan, Ken Rolston, Eric Goldberg, etc.

While a central focus of the books has been on digital media, especially games, you have always sought to define the topics broadly enough to be able to include work on other kinds of media. In the case of Third Person, these include science fiction novels, comic books, and television series. What do we learn by reading the digital in relation to these other storytelling traditions?

When we talk about "digital media" or "computational media," we're talking about something that is both media and part of a computational system (usually software). As we see it, the lessons digital projects can learn from non-digital projects are both in their aspects that are akin to traditional media (for example, how they handle stories and universes constructed by multiple authors) and in their systems (how they function--and how these operations shape audience experience). The articulation between the two, of course, is key.

We're certainly not the first people to note this. For example, it's been suggested (Noah remembers hearing it first from Australian media scholar Adrian Miles) that digital media creators often fret about a problem well known to soap opera authors: What to do with an audience who may miss unpredictable parts of the experience? Obviously the problem isn't exactly the same, because one case is organized around time (audiences may miss episodes or portions of episodes) and the other is organized by more varied interaction (e.g., selective navigation around a larger space). But there is a common authorial move that can be made in both instances: Finding ways to present any major narrative information in different ways in multiple contexts, so that the result isn't boring for those who see things encyclopedically and doesn't make those with less complete experiences feel they've lost the thread.

Of course, what the above formulation leaves out is that this problem doesn't have to be solved purely on the media authoring side, and perhaps isn't best solved there. Another approach is to design the computational system to ensure that the necessary narrative experiences are had, as appropriate for the path taken by any particular audience. This requires thinking through the authorial problem ("How do we present this in many different contexts?"). But ideally it also involves moving that authoring problem to the system level ("How can we design a component of this system that will appropriately deliver this narrative information in many different contexts, rather than having to write each permutation by hand?"). And, if successful, you don't have to solve the difficult authoring problem of keeping your audience from being bored because they're getting variations on the same narrative information over and over. Then you can use the attention they're giving you to present something more.

Obviously, this isn't easy to do. Computationally-driven forms of vast narrative are still rapidly evolving (at least on the research end of things). But the basic issues are ones that non-digital media have addressed in a rich variety of ways. Even the question of what kinds of experiences one might create in this "vast" space is one that we need to think about broadly--it's a mistake to think we already know the answer--and looking at non-digital work broadly is a part of that.

You write, "Today we are in the process of discovering what narrative potentials are opened by computation's vastness." Is that what gives urgency to this focus on "authoring and exploring vast narratives"?

Personally, that's an important part of our interest. But it's certainly not the only source of urgency. As the variety of chapters in the book chronicles, in part, we're currently seeing exciting creativity in many forms of vast narrative. One might argue that something enabled by computers--digital distribution--is part of the reason for this (e.g., television audiences and producers are perhaps more willing to invest in vast narrative projects when "missing an episode" is less of a concern). But we think of this as distinct from things enabled by computation (permutation, interaction, etc.), especially because some systems (such as tabletop games) carry out their computation through human effort, rather than electronically.

How are you defining "vast narratives"? What relationship do you see between this concept and what others are calling "transmedia storytelling," "deep media," or "crossplatform entertainment"?

Definition isn't a major focus of our project, but there are certain elements of vast narrative that especially attract our attention.

First, we're interested in what we call "narrative extent," which we think of as works that exceed the normal narrative patterns for works of a particular sort. So, for example, The Wire doesn't have that many episodes as police procedurals go (CSI has many more), but it attains unusual narrative extent by making the season--or arguably the entire run of five seasons--rather than the episode, the meaningful boundary.

Second, vast narrative is interesting to us in the many projects that confront issues of world and character continuity. Often this connects to practices of collaborative authorship--including those in which the authors work in a manner separated in time and space, and in many cases with unequal power (e.g., licensor and licensee).

Third, and connected to the previous, we're interested in large cross-media narrative projects, especially those in which one media form is not privileged over the others. So, for example, the universe of Doctor Who is canonically expanded by television, of course, but also by novels and audio plays. On the other end of the spectrum, Richard Grossman's Breeze Avenue project includes a 3-million-word, 4,000-volume novel, as well as forms as different as a website and a performance with an instrument constructed from 13 automobiles--all conceived as one project.

Fourth, the types of computational possibilities we've discussed a bit already, which are present not only in games (we have essays from prominent designers and interpreters of both computer and tabletop games) but also in electronic literature projects and the simulated spaces of virtual reality and virtual worlds.

Fifth, multiplayer/audience interaction is a way of expanding narrative experiences to vast dimensions that we've included in all three books--including alternate-reality, massively-multiplayer, and tabletop role-playing games. Here the possibilities for collaborative construction and performance are connected to those enabled by computational systems (game structures are fundamentally computational) but exceed them in a variety of ways.

Given all of this, it's probably fair to say that our interests are a superset of some of the other concepts you mention. For example, your writing on transmedia storytelling certainly informs our thinking about vast narrative--but something like a tabletop RPG campaign is "vast" for us without being "transmedia" for you.

Patrick Harrigan is a Minneapolis-based writer and editor. He has worked on new media projects with Improv Technologies, Weatherwood Company, and Wrecking Ball Productions, and as Marketing Director and Creative Developer for Fantasy Flight Games. He is the co-editor of The Art of H. P. Lovecraft's Cthulhu Mythos (2006, with Brian Wood), and the MIT Press volumes Third Person: Authoring and Exploring Vast Narratives (2009), Second Person: Role-Playing and Story in Games and Playable Media (2007), and First Person: New Media as Story, Performance and Game (2004), all with Noah Wardrip-Fruin. He has also written a novel, Lost Clusters (2005).

Noah Wardrip-Fruin works as a digital media creator, critic, and technology researcher with a particular interest in fiction and playability. His projects have been presented by conferences, galleries, arts festivals, and the Whitney and Guggenheim museums. He is author of the forthcoming Expressive Processing: Digital Fictions, Computer Games, and Software Studies (2009) and has edited four books, including Second Person: Role-Playing and Story in Games and Playable Media (2007), with Pat Harrigan, and The New Media Reader (2003), with Nick Montfort. He is currently an Assistant Professor with the Expressive Intelligence Studio in the Department of Computer Science at the University of California, Santa Cruz.

My Secret Life as a Klingon (Part Two)

So, there was a second trip out to Hollywood, this time to try on the actual costumes and make sure that they fit. And I got to wander around through the costume racks, taking note of references to a Cantina sequence and a Vulcan Tea Ceremony, among other things. I overheard the people working there chatting about what color lingerie the blue-skinned Orion girl should wear for the movie. (Pink really would have been a bad choice!) And I got fitted for my costume. Now, by this point, I was starting to get a little anxious about how I was going to pull off a Klingon part when the other Klingons were a good foot taller than me, sometimes more, and most of them naturally had much broader builds. I was going to be the scrawniest Klingon in the galaxy. They kept reassuring me that they would build me up through the padded costume, though I was fully aware that they were going to be using padded costumes for the other guys too, so we were locked into an armor race that I was never going to win.

That said, the costume they gave me was breathtaking. They had designed helmets for the extras to wear with built-in head-bumps so that they wouldn't have to spend hours in a make-up chair with each of us. I had a floor-length greatcoat made out of a rubbery material designed to look like elephant skin or some alien equivalent. And I had big shiny black boots.

Once I put all of this on and looked in the mirror, I felt Klingon down to the soles of my feet.

But there was one small problem: the pants they gave me were way too baggy and kept sliding down. There's a reason why I always wear suspenders, and it's only partially a fashion statement. They took my measurements again and promised me that they would take up the pants so this wouldn't be a problem on the set. After all, this was the whole reason I had flown out to LA just to do a costume fitting and was about to fly back to teach class the following morning.

A week later, I met the other cast and crew of the film on the piers at Long Beach for what was going to be an all-night shoot at the secret location they had transformed into a Klingon prison compound. There was an army of us sitting there, waiting, eating the best array of junk food I've ever seen, and trying to cope with what promised to be a "hurry up and wait" kind of evening. There was a minor crisis when the casting director came around to ask us to take off our jewelry and I realized that there was no way I could take off my wedding ring. It's not that I wasn't willing, but after almost 30 years of marriage, my finger has grown around it, and it would take a jeweler's saw to cut it off me. Luckily, just as they were about to throw me off the set, I remembered that my character was supposed to be wearing heavy black gloves, so no one would ever see my ring finger, and they let it pass.

We were led back to the make-up tent, where I spent about half an hour in the chair as they blackened the bottom part of my face and added a bristle goatee on top of my already scraggly-looking beard. From there, we were supposed to wear robes and hoods so that the spoilers who were camped out around the location couldn't take our pictures. Once we got into costumes and make-up, we began to separate ourselves off by race: the Klingons started to hang out with the Klingons, the Romulans with the Romulans, and then there were all of the other prisoners, who represented an array of classic Trek races, including a guy in a really spectacular costume as a Salt Vampire.

Once everyone was in make-up, costume, and robe, we were all loaded onto a bus and driven some distance away. As we stepped off the bus, I set eyes on the set for the first time -- there were cameras on cranes and huge lighting units; there were synthetic boulders and giant fans blowing across the set; and there were massive fire pits in the ground which erupted into flames as the crew tested the equipment. It was about this point that it occurred to me that Klingons are not known for their designer eye-wear and that I am very nearsighted. This was going to be the first and last chance I would get to see the set in focus. A few minutes later, someone circulated through and asked those of us who are visually impaired to remove our glasses.

You can ask me whether J.J. Abrams was on the set that night, but I couldn't tell you, because I never saw him. I did hear the amplified voice of someone who was directing the scene coming down from on high. I never met the man, though people kept saying that I really should see if I could meet him, since he had specifically asked for me in the movie. It was clear some of the other extras in the scene were there because they had been hardcore fans of the series. Some bragged that they had also done extra work for Battlestar, Star Wars, and even Doctor Who, so some of these fans get around. By this point, there were persistent rumors that I spoke fluent Klingon. I do not. I barely speak English and have no gift for foreign languages. And even before I got into conversations with anyone, they were already calling me "the Professor." I suppose that being a professor isn't something I do: it's who I am. In any case, it seemed that when people heard I had written a book on Star Trek, the only mental image they had was that I had written a book on the Klingon language.

They moved us out onto the set and gave us our positions. We weren't told very much about what was happening in the scene. Everything was on a need-to-know basis. All we knew was that we were Klingons guarding prisoners and that things were falling from the sky and exploding all around us. We were told that if we really got into our characters, we'd have a much stronger chance of ending up on screen in the final film, and there was a roving camera just trying to grab expressive closeups. We got no instruction on how to hold our weapons, and as I looked around, it was clear that there was not exactly trained consistency in things like whether guards held the gun barrel pointing down or up. Some of the guys had military training, and we consulted with them, trying to at least understand human practices in this regard. I don't think I realized before how much extras really are improvising, creating their own characters, with very limited attention from the production staff. Having gone through this experience, I find myself much more attentive to extras in the backgrounds of shots. But many of us had a real fear that nit-picking fanboys were going to nail us for not holding our weapons the Klingon way!

And then they started staging a range of different vignettes -- at one point, I was trying to keep a group of increasingly unruly prisoners at bay with a disrupter rifle; at another, I was on guard duty looking out over the prison complex. The most spectacular moment came when I was handed a torch (they are heavier than they look!) and told to lead a group across the compound as the wind blew down upon us and things exploded on all sides. Of course, being nearsighted, I couldn't see more than a few feet ahead of me, so the group zig-zagged like crazy as I tried to avoid getting myself blown to bits or running into the blades of the giant fans. There was a real look of terror on my face for those sequences! I know I caused more than a little frustration for the assistant director who was trying to stage this little scene.

And, oh yes, my pants kept sliding lower and lower down my butt: at first, it was hip-hop style, but in one scene, I had to grab my waist to keep my pants from sliding off altogether. I suppose that the Klingon army, like other military organizations, is indifferent to matching guards with the right size uniforms. Periodically throughout the evening, I had a costume girl try yet again to stitch up the costume so it wouldn't slide off me. But they never seemed to fully solve the issue.

By this point, between my clumsiness with the guns, my near-sightedness, my slight size, and my baggy pants, I was starting to think of myself much more as a comic than a heroic figure. I am K'henry the Hapless! Fear my fumbles!

As the evening went along, everything became more and more casual. The Salt Vampire was letting us feel his rubbery tentacles, and everyone seemed to want to hold my disruptor. If at first we had sorted ourselves by race, we soon just collapsed in the green room between takes, indifferent to whoever was sitting next to us. If at first we had taken everything too seriously, before long a row of Klingons was singing "I Feel Pretty" from West Side Story or doing the "Crank Dat Soulja Boy" dance.

At one point, they planted me on a rock to wait for instructions and forgot about me in the fog of war. I ended up dozing off in the wee hours of the morning and woke up vaguely disoriented, sitting in a Klingon prison compound, holding a disruptor in my hands.

At another point, they lined us all up in various action poses for photographs, and we started to joke that we were posing for the action figures; indeed, the setup reminded me of those little green army guys I played with as a kid.

Somehow, we all managed to stay more or less awake through the night, though I gradually started to feel a level of exhaustion I hadn't felt in decades. They loaded us on the buses, collected our costumes, and sent us on our way.

No, I didn't meet any members of the cast, though I did see some of the Romulan characters with tattooed faces, and so I am left wondering if one of them was Nero. No, I never met J.J. Abrams. And no, I don't have any photographs of myself dressed as a Klingon: they didn't allow any cameras on the set because they didn't want any of us leaking images prematurely to the media.

I had been telling friends that I had played one of the classic alien races in the film: some imagined a Vulcan, some suggested a Ferengi. But for months, there was no reference to Klingons in the build-up to the movie, there was no Klingon footage in the previews, and I got really anxious. I knew from the beginning that as an extra in a scene which involved more than 60 extras, my odds of ending up on screen were pretty small, and I had to keep lowering the expectations of the students and staff who imagined something bigger. I figured that once we had some footage of Klingons, I could start to tell people, but I didn't want to be the blogger who spilled the beans. Eventually, Abrams announced through the blogosphere that he was going to cut the Klingon sequence from the film: "There was a big Klingon subplot in this and we actually ended up having to pull it out because it confused the story in a way that I thought was very cool but unnecessary. So we have these beautiful designs that we're going to have to wait and do elsewhere I guess."

I've read various reasons for his decision: trying to streamline the character motivations, trying to avoid confusion about the current relationship between the Klingons and the Federation for those viewers who only know the later Treks where the Klingons are our friends, and trying to keep the opening of the film crisp and taut. It's pretty clear from the dialogue included more or less where the Klingon sequence would have gone. And I'm personally hoping we get to see this footage as a DVD extra.

My biggest disappointment is that we probably will never see Klingon action figures for this film. I had fantasies of getting a figurine of a Klingon in a floor-length elephantine coat holding either a torch or a disruptor.

So, now you have it, the saga of K'Henry the Hapless, the scrawniest Klingon in the galaxy, and how he ended up on the cutting room floor.

My Secret Life as a Klingon (Part One)

[Image: artist's approximation created by Ivan Askwith]

At long last, I can share with you, oh loyal reader, the utterly true, sometimes comical story of how I became a card-carrying Klingon in the new Star Trek film (well, almost). I've been itching to share this yarn for the past year and a half but had wanted to wait until the film was in the theaters and many of you would have had a chance to see it.

The adventure began with an unexpected e-mail: a Hollywood casting director wrote me to say that J.J. Abrams wanted to include me in the then-upcoming Star Trek reboot. At first, to be honest, I thought it was a joke. I had no idea that J.J. Abrams knew who I was. We have never had any direct contact with each other, though my mind raced trying to figure out the chain of events which might have led him to discover me. Might J.J. be a reader of this blog?

My loyal and trustworthy assistant, Amanda, did some follow-up and got on the phone with the Hollywood type to try to determine what would be involved in shooting "my" scene for the movie. Doing so would require me to take three trips to Los Angeles in a little under a month -- not a small demand given the number of long-standing commitments I had -- and I would need to do so on my own dime. What I was being offered was a chance to become an extra, and in Hollywood, as I would discover, extras are in some cases literally recruited off the streets, and all of them are paid only a minimal wage.

The idea of a full professor at MIT flying to Hollywood to appear as an extra was absurd, but given my life-long love of this particular media franchise, which had inspired two of my books and several more articles, not flying to LA to be an extra in a freaking Star Trek movie would have been equally absurd.

I had to do it, even though it meant postponing some significant meetings, ducking out early from academic conferences, and taking a series of red eye flights, not to mention spending several thousand dollars. I have often joked about boldly going where no humanities scholar has ever gone before and this was going to be a wild ride.

So, I flew out to Hollywood and made my way, straight from the airport, to the Paramount Studios backlot, dragging my suitcase behind me. I was greeted by the casting agent and then led, along with an army of other people, out to what literally amounted to a cattle call. I was lined up against the wall with about fifty or sixty other men as people with clipboards moved along the line, discarding some, shifting some to another wall, and otherwise sorting us into smaller groups. I was trying to make sense of the patterns: along my wall were men who were, for the most part, bald with ample facial hair. So far, I fit the category they were looking for.

But then I became acutely aware that I needed to strain my neck to see the tops of the other men's heads. Most of them looked tall enough to play professional basketball, and most of them were black. Indeed, by the time the sorting-out process was done, I was the shortest, whitest guy left standing. They then took us one by one into a dressing room area to take our measurements and have us try on some costumes for size. I was fitted with some heavy leather gloves, some pants which looked like they came from a military uniform, some tall black boots, and a helmet. I glanced down at a clipboard when the costumer wasn't looking and saw the notice, "Klingon Guard," and my heart beat a bit faster. It wasn't until the second trip out to Hollywood that the costumers confirmed that I was indeed going to be given a chance to play a Klingon part. (Indeed, some of the other extras only learned they were in a Star Trek movie when they arrived on the set for our actual shoot.)

Now, keep in mind that being a Klingon has been one of my life-long ambitions. When I was in high school, I went to the DeKalb County Honors Camp, where I majored in drama. I spent the summer in the company of some of the wackiest friends I ever had, doing skits and plays, and when we were not doing that, just cutting up in the hallways. One of the girls in our cohort was a hardcore Trek fan. At that point, I had watched the series as a casual viewer but had not taken the plunge. But she decided she was going to adapt the script of David Gerrold's "The Trouble with Tribbles" for the stage, and we were all going to play parts. I met a guy, Edward McNalley (who is still one of my best friends), when he got pulled in from another group to play Spock. I was cast as the Klingon officer who sparks a bar fight with the Enterprise crew when he insults first its captain and then the ship itself. In getting ready to play the part, I started reading every book I could find on the series -- The Making of Star Trek, The World of Star Trek, Star Trek Lives, and of course the James Blish novelizations of all the episodes, even the photonovels and the View-Master slides. That's how you kept up on a series back in the days before any of us had a VCR, though my wife still has audio tapes, recorded through alligator clips attached to the television's sound system when the series first aired. It was through all of this reading that I discovered not only Star Trek but also the fan culture around it.

Flash forward several decades to when I was doing research for Science Fiction Audiences, the book I wrote with John Tulloch. That's when I became a Klingon for a second time. I was trying to research Klingon fan culture as a contrast to the female fanzine writers, the GLBT activists, and the MIT students who figured prominently in that study. In true participant-observation fashion, I joined a Klingon role-playing group, seeking to better understand what it was like to walk with that particular swagger. In many ways, this Klingon fandom was a branch of the men's movement which was taking shape around Robert Bly's Iron John. Most of those I met were working-class men who were embracing a warrior mythology to work through anger and frustrations they had encountered in life. Both the men and women involved struck me as experimenting with power and trying to reclaim aspects of masculinity which they saw as under threat elsewhere in the culture. In the end, my research on Klingons was a failed project which never found its way into the final book.

I never really could figure out how to perform Klingon masculinity in a convincing manner, and I got lost in the role-play activity. I had been cast as a Klingon ambassador, which I took to be an oxymoron, and so I proceeded by insulting and abusing the Federation ambassadors with whom I was interacting, much as my character in "The Trouble with Tribbles" had intentionally picked a fight with the Enterprise crew. But the guy representing the Federation took it all too personally, could never grasp that I was playing a character and that we were operating in a magic circle, and eventually filed a protest against me, which led to the Klingon high council suggesting that I step down from my post. I guess I played too rough to be a Klingon, go figure.

Skip forward a few more years and I'm being profiled in the Chronicle of Higher Education. The photographer is scoping out my living room when he stumbles on my Bat'leth, a Klingon battle sword, which I have propped up against my fireplace. And he asks if I would be willing to pose with it for a photograph. As a long-time fan, I smell a trap. After all, I've written critically about the ways news coverage depicts fans in costumes with program-related trinkets as people who can't separate fantasy from reality. Even with the release of the new film, I am reading lots of prose about "rubber Vulcan ears" and the like, despite two decades of trying to dismantle those hurtful cliches. But I also relished the absurdity of appearing in the Chronicle of Higher Education showing off my Klingon cutlery and so, once again, in for a penny, in for a pound.

So, given that history, I can't tell you the excitement I felt when I called my wife, a fellow lifelong Trekker, to tell her that I was about to become an official Klingon. She was jealous, of course; what wife wouldn't be? But she also was really supportive of this fantasy-fulfilling opportunity.

Next Time: Going on Set, Shooting the Scene, and How the Klingons Ended Up on the Cutting Room Floor.

Five Ways to Start a Conversation About the New Star Trek Film

Spoiler Warning: The following post assumes you saw the new Star Trek film this weekend. If you didn't, you probably shouldn't be reading this post. You should be heading to a multiplex. Cynthia and I went to see the new Star Trek film this weekend. We have managed to see every Star Trek film together as a couple on opening weekend since the film franchise launched with Star Trek: The Motionless Picture in 1979.

So, the two of us proceeded to spend the better part of the evening going through the film scene by scene armed with a lifetime of fan and critical perspectives on the franchise, trying to figure out what it signals about the future of Trek.

We certainly went into the film with high hopes but also with a certain sense of dread. J.J. Abrams has worked hard to demonstrate to the world that "this is not your father's Star Trek," and the problem is that we are, well, sorta, when you look at our birth certificates and all, part of "your father's" generation. People like "your father" -- and even more likely "your mother" -- have kept Star Trek a viable franchise for more than four decades. None of us object to bringing in new souls for the faith or attracting younger followers, but you don't have to write off the old fans to do so.

We certainly were not opposed to the recasting of cherished characters: quite the opposite, many of the franchises we care about -- Robin Hood, Sherlock Holmes, Cyrano, Hamlet, Sam Spade -- have been recast many times with differing results but always with new discoveries to be made. We certainly hoped that having someone other than William Shatner playing the part would rekindle our respect and affection for Kirk as a character, for example, while we remained skeptical that a new actor could capture the complexity which Leonard Nimoy has achieved in his portrayal of Spock over the years. As a fan of the new Battlestar Galactica series, I'd be hypocritical if I objected to them rethinking the characters or revamping the worlds depicted on the series.

When Cynthia was asked what she thought upon walking out of the theater, she responded that it felt like a Star Trek movie precisely because there were things we loved and things we hated about it. It's been like that from the beginning and it will always be thus.

Rather than write a review of the film, though, I figured I'd throw out some discussion topics. After all, it's exam season around here and so the genre of essay questions comes readily to hand. The following are some of the things we've been debating since we saw the film:

1. For us, the coolest thing in the movie was the image of Vulcan educational practice, which is consistent with previous representations (most notably the scenes of Spock retooling himself in Star Trek III) but also gave us new insights. Vulcans seemingly learn in isolation yet immersed in a rich media landscape. Each climbs down into a well surrounded by screens which flash information, allowing them to progress at their own rate, dig deeper into those things which interest them, and at the same time, develop a certain degree of autonomy from other learners. There are no teachers, at least none represented in the segment we are shown here, but rather the individual learner engaging with a rich set of information appliances. In some ways, this is the future which many educators fear -- one where they have been displaced by the machine. In other ways, it is the future we hope for - one where there are no limits placed on the potentials of individual learners to advance.

But if learning is individualized, why do people come together into what can only be described as a school? Why not locate the learning pod in each home? Why have a structured school day?

In the midst of all of this well-considered if somewhat alien pedagogy, we are introduced to the issue of Spock's bullying by his classmates. The scene where he confronts the bullies is oddly ritualized, as if he were reporting to them for the day's insults and abuses, and as if they were testing his ability to develop the toughness and emotional control to push aside those insults. It's clear elsewhere that he faces a certain degree of prejudice as a result of his half-human/half-Vulcan background -- see the casual deployment of race as a handicap as he is admitted to the Vulcan Science Academy. But here, it is as if there is a system of ritualized bullying designed to test and toughen each student. What if bullying were incorporated into the pedagogical regime, as it is, more or less, in several other educational systems on our planet? Certainly the content of the insults would be different in each case, but the logic of ritualized insults as a way of developing emotional control is not actually alien to the way Earth cultures operate.

2. I've read reviews which suggest that the Uhura in this film represents a progressive reworking of the character from classic Trek. I'm not convinced yet, even though I very much liked the actress who played the part. However limited her role might have been ("hailing frequencies are open, Captain"), the original Uhura was defined first and foremost by her contributions as a member of the Enterprise crew. Whatever subtext there was suggesting a Kirk/Uhura romance, it was just that -- a subtext -- left for fans to infer from a few telling moments in the trajectory of the series, among them the first interracial kiss on American television, albeit one executed under mind control, though implied to be a projection of one or both characters' actual desires.

In the new film, Uhura asserts her professional competence but she never really demonstrates it. How does that make her different from many of the female professionals in classic Trek who are introduced in terms of their professional abilities and then reduced to being the girlfriend of the week for one of the primary characters? Here, more screen time is devoted to her but she's ultimately a love object in some kind of still to be explored romantic triangle between Kirk and Spock. Basically, she's been inserted into the story to discourage fans from writing slash stories, though most of us won't have any trouble figuring out how the exchange of women facilitates an expression of homosocial/homoerotic desire.

The classic definition of a Mary Sue is someone who is claimed to have extraordinary mental abilities, who manages to gain the romantic interest of multiple members of the crew, and who manages to have the information needed to save the ship. In what sense, then, is the new Uhura anything other than a Mary Sue figure in the body of an established character? Surely after forty-plus years, Trek can imagine a more compelling female character.

3. I'm still trying to make sense of the implications of Kirk's absurdly rapid rise to command in this version of the story. In the past, we were allowed to admire Kirk for being the youngest Starfleet captain in Federation history because there was some belief that he had actually earned that rank. Here, he gains command in large part because Captain Pike was an old family friend and because he had one really successful mission. It's hard to imagine any military system on our planet which would promote someone to a command rank in the way depicted in the film. In doing so, the film detracts from Kirk's accomplishments rather than making him seem more heroic. This is further compromised by the fact that we are also promoting all of his friends and letting them go around the universe on a ship together.

We could have imagined a series of several films which showed Kirk and his classmates moving up through the ranks, much as the story might be told by Patrick O'Brian or in the Hornblower series. We could see him learn through mentors, we could see the partnerships form over time, we could watch the characters grow into themselves, make rookie mistakes, learn how to do the things we see in the older series, and so forth. In comics, we'd call this a Year One story, and it's well-trodden space in the superhero genre at this point.

But there's an impatience here to give these characters everything we want for them without delays, without having to work for it. It's this sense of entitlement which makes this new Kirk as obnoxious as the William Shatner version. What it does do, however, is create a much flatter model for the command of the ship. If there is no age and experience difference between the various crew members, if Kirk is captain because Spock had a really bad day, then the characters are much closer to being equals than on the old version of the series.

This may be closer to our contemporary understanding of how good organizations work -- let's think of it as the Enterprise as a start-up company where a bunch of old college buddies decide they can pool their skills and work together to achieve their mutual dreams. This is not the model of how command worked in other Star Trek series, of course, and it certainly isn't the way military organizations work, but it is very much what I see as some of my students graduate and start to figure out their point of entry into the creative industries.

4. If the narrative makes it all look too easy for the characters, the narrational structure makes it much too easy for the viewers. There's a tendency not so much to ask questions as to hand us answers to the questions fans have been struggling with over the past four decades. So, for example, classic Trek was always careful not to fully explain how Sarek and Amanda got together, allowing Vulcan restraint to prevent Sarek from fully articulating what he feels towards Spock's mother. As a consequence, there were countless fan fiction narratives trying to imagine how Sarek and Amanda got together -- Jean Lorrah, for my money, wrote the best of these, though there were other great fan novels on precisely this theme. Yet here, the question is asked and answered, overtly, in a single scene.

Ditto the issue of whether Vulcans are incapable of feeling emotion on some biological level or if they have simply developed mental discipline to bring their emotions under their control. Again, this question inspired decades of fan fiction writing and speculation and is here dispatched with a few short sentences.

The mystique that surrounded Spock from the start had to do with things he was feeling but could not express: he is a deeply divided character, one who broods about where he belongs and how he relates to the other Enterprise crewmembers. But this film makes it look ridiculously easy for him to get a girlfriend, and he is surprisingly comfortable necking in the transporter room, an act that it is impossible to imagine Spock Prime performing. The original Spock was a deeply private person. It isn't that the new film has made Spock sexy. The old Spock was a whole lot sexier than the new Spock for all of his hidden depths and emotional uncertainties: the new Spock is just too easy all around, and there's no real mystery there. He isn't sexy; he's having sex, and that's not the same thing at all.

5. As a stand-alone film, it's reasonably engaging: I like most of the cast and think they achieve good chemistry together. The pace is, as has been suggested, good, though most of the action scenes -- except for the free fall sequence -- seem pretty average. It's a flawed work, but I'm certainly in for more adventures. My problem is that the film didn't give us much to anticipate for the sequel. In answering its mysteries so easily and not setting up new ones, there's just not that much room for speculation and anticipation.

This would work if it were the pilot episode of a new television series. I haven't loved any of the pilot episodes, but they gave me enough reasons to like the characters that I kept watching. It usually takes a good number of episodes for the cast to jell with their characters, for the writers to figure out what they are doing, and for the audience to figure out what is distinctive about the new series. I think I need more momentum to get over the hump than a movie every few years can provide, and that's why television would have worked better than a feature film to relaunch the franchise.

Is this a space where transmedia storytelling practices can create a bridge between this film and the next? Are there other ways that they can allow us to have encounters with these characters as embodied by the new cast? If so, what strategies will be most effective at strengthening whatever level of identification was created by this new film?

Finally, if there are new fans created through this relaunch of Star Trek, which is certainly what Abrams and company claim as their goal, what has the film left them to do? What are the gaps and kernels they will work with? It's clear enough what the cultural attractor here is, but what is the cultural activator?

Then, again, there's nothing wrong with this film that couldn't have been improved by the addition of Klingons. I will explain later in the week.

What Is Learning in a Participatory Culture? (Part Two)

Today, I am running the second part of an essay written by Erin Reilly, the Research Director of the New Media Literacies Project (NML) in which she tells you more about our new learning library. If you have not yet checked out the learning library, you can find it here. And if you want to learn more about how it is starting to be deployed across a range of educational settings, check out the special issue of Threshold magazine about "Learning in a Participatory Culture."

Exploring New Media Literacies

My work on Zoey's Room was an ideal segue to applying practice to Project NML's research into how a participatory culture facilitates learning in the 21st century. Outside their classrooms, which largely still follow a top-down model of teachers dispensing knowledge, today's children learn by searching and gathering clusters of information as they move seamlessly between their physical and virtual spaces. Knowledge is acquired through multiple new tools and processes as kids accrue information that is visual, aural, musical, interactive, abstract, and concrete and then remix it into their own storehouse of knowledge. Describing how learning and pedagogy must change in this new cultural and multimedia context, the think tank New London Group argues that "literacy pedagogy now must account for the burgeoning variety of text forms associated with information and multimedia technologies."

Indeed, they describe how "the proliferation of communications channels and media supports" sets up a need for "creating the learning conditions for full social participation." The media-literacy movement has effectively taken the lead among educators in this regard by teaching students to analyze the media they consume and to see themselves as both consumers and producers of media. However, even this learning often is relegated to electives or to after-school programs rather than being integrated across curricula. The new media literacies allow us to think in very different ways about the processes of learning, because they acknowledge a shift from the top-down model to one that invokes all voices and all means of thinking and creating to build new knowledge. For many educators, however, this raises issues of maintaining control, building trust, and providing an open-source culture of learning that allows students to share their own expertise in the classroom. At the same time, the mindsets and skill sets of the new media literacies are changing the discipline itself. In effect, we are teaching an outdated version of literacy if we do not address the sorts of practices that new media and new technologies support.

Invitation to Participate

Integrating the new media literacies into learning echoes the concept of syndesis presented by social anthropologist Robert Plant Armstrong in "What's Red, White, and Blue and Syndetic?" (1982). Syndesis is a process that strings together self-contained moments or increments of what Armstrong calls "presence" to form a whole. Syndesis has important applications to today's learning environment because it ensures that educators and students contribute to the body of knowledge being formed by the group. The end result is an environment that shares information in multiple formats that become similar only when the group pulls them together.

One major approach to the new learning paradigm at Project NML is the Learning Library, a new type of learning environment that embraces the characteristics of syndesis and participatory culture. The Learning Library is an activities-based model that aggregates media from the Web--such as a video, image, or audio file--and provides tools for users to integrate that media into a learning objective. Educators are encouraged to load their own media or draw on media by others that already exist in the Library to shape new learning challenges and to collaboratively build and share new collections based on particular themes. These challenges range from playing a physics game designed to experiment with problem-solving, to developing collaborative ways to bring innovation into the classroom, to learning about attribution while exploring issues involving copyright, public domain, fair use, and Creative Commons.

Project NML has seeded the Learning Library with its first collection of 30 learning "challenges" so that users can explore and practice applying the new media literacies to their classroom activities. One example from our first collection of challenges, called Expressing Characters, uses the new media literacy of transmedia navigation. In this activity, a student learns how plot can be extended across media by following the adventures of Claire Bennet, a character from the TV show Heroes. After exploring how Claire is already portrayed on television, in a graphic novel, and on MySpace, learners practice transmedia navigation by adapting and extending one of their own favorite characters into media forms in which the character does not currently exist. Bringing their own experiences to this challenge, students then load their creations into the Library, where they can be viewed and remixed into a different learning objective by others. By exploring and practicing the new media literacy skill of transmedia navigation, students learn to make meanings across different media types--not just in relation to print text. In this way, these new modes of communication are highlighting the need to teach new ways of expression and new methods of understanding the digital world.

Conclusion

A prime goal of Project NML is to understand what happens when multiple forms of media are fully integrated into processes of learning. The new media literacies build upon existing print literacy practices, making possible new literacy practices where, according to the New London Group, "the textual is also related to the visual, the audio, the spatial, the behavioral, and so on." And these practices offer new resources and pathways for learning the disciplines.

Our students are already appropriating information from the Web and turning it into new knowledge. They are already learning from each other and participating in the learning of their peers. They already connect, create, collaborate, and circulate information through new media. The goal for us, as educators, is to find new ways to harness and leverage their interests and social competencies to establish a participatory learning environment. Teachers and administrators must learn to leverage this new learning paradigm to engage our students, and we encourage you to use the Learning Library and see if it works for your context.

Resources

Armstrong, Robert Plant. "What's Red, White, and Blue and Syndetic?" Journal of American Folklore, 1982.

Building the Field of Digital Media and Learning. MacArthur Foundation.

Jenkins, Henry et al. "Confronting the Challenges of Participatory Culture: Media Education for the 21st Century." MacArthur Foundation, October 2006. digitallearning.macfound.org

The New London Group. "A Pedagogy of Multiliteracies: Designing Social Futures." In Multiliteracies: Literacy Learning and the Design of Social Futures, edited by Bill Cope and Mary Kalantzis. Routledge, November 1999.

Erin Reilly is a recognized expert in the design and development of educational content powered by virtual learning and new media applications. As research director of MIT's Project New Media Literacies, Reilly helps conceptualize the vision of the program and develop a strategy for its implementation. Before joining MIT, Reilly co-created Zoey's Room, a national online community for 10- to 14-year-old girls, encouraging their creativity through science, technology, engineering, and math. In 2007, Reilly received a Cable's Leaders in Learning Award for her innovative approach to learning and was selected as one of the National School Boards Association's "20 to Watch" educators.

Pew Internet & American Life Project.

Zoey's Room.

"Geeking Out" For Democracy (Part Two)

A close look at the recent presidential election shows that young people are more politically engaged now than at any point since the end of the Vietnam War era. 54.5 percent of Americans ages 18 to 29 voted last November, constituting a larger proportion of the total electorate -- 18 percent -- than Putnam's bowlers, people 65 and older (16 percent). The youth vote was a decisive factor in Obama's victories in several states, including Indiana, North Carolina, and possibly Florida. John Della Volpe, director of polling for the Harvard Institute of Politics, told U.S. News & World Report that the desire to make the world a better place was "baked into the millennials' DNA" but "they just didn't believe they could do that by voting." Political scientist Lance Bennett has argued that unlike Putnam's bowlers, this generation's civic identities are not necessarily defined through notions of "duty" or through once-every-four-years rituals like voting; rather, he argues, they are drawn towards "consumerism, community volunteering, or transnational activism" as mechanisms through which to impact the larger society. The Obama campaign was able to create an ongoing relationship with these new voters, connecting across every available media platform. Log onto YouTube and Obama was there in political advertisements, news clips, comedy sketches, and music videos, some created by the campaign, some generated by his supporters. Pick up your mobile phone and Obama was there with text messages updating young voters daily. Go to Facebook and Obama was there, creating multiple ways for voters to affiliate with the campaign and each other. Pick up a video game controller and Obama was there, taking out advertisement space inside several popular games. Turn on your TiVo to watch a late-night comedy news show and Obama and his people were there, recognizing that The Daily Show or Colbert are the places where young people go to learn more about current events.
This new approach to politics came naturally to a candidate who has fought to be able to use his BlackBerry and text messaging as he enters the White House, who regularly listens to his iPod, who knows how to give a Vulcan salute, who brags about reading Harry Potter books to his daughters, and who casually talks about catching up on news online. The Obama campaign asked young people to participate, gave them chances to express themselves, enabled them to connect with each other, and allowed them to feel some sense of emotional ownership over the political process.

What has all of this to do with schools? Alas, frequently, very little.

Let's imagine a learning ecology in which young people acquire new information through all available channels and through every social encounter. The child learns through schools and after-school programs; the child learns on their own through the home and family and through social interactions with their peers. They learn through face-to-face encounters and through online communities. They learn through work and they learn through play. The skills they acquire in one space help them master core content in another. Through Project New Media Literacies, we have been developing resources which can be deployed in the classroom, in after-school programs, and in the home for self-directed learning, seeking a more integrated perspective on what it means to learn in a networked society. Yet, right now, most of our schools are closing their gates to the cultural practices and forms of informal learning that young people value outside the classroom, and in the process, they may be abdicating their historic role in fostering civic engagement.

In a 2003 report, CIRCLE and the Carnegie Corporation of New York sought to document and analyze "the civic mission of schools." Historically, schools had been a key institution in fostering a sense of civic engagement. While their parents were bowling, their children were getting involved in student governments, editing the student newspaper, and discussing public affairs in their civics classes. The Civic Mission of Schools reports: "Long term studies of Americans show that those who participate in extracurricular activities in high school remain more civically engaged than their contemporaries even decades later... A long tradition of research suggests that giving students more opportunities to participate in the management of their own classrooms and schools builds their civic skills and attitudes... Recent evidence indicates that simulations of voting, trials, legislative deliberation, and diplomacy in schools can lead to heightened political knowledge and interest." Yet, the committee that authored the report ended up sharply divided about how realistic it was to imagine schools, as they are currently constituted, giving young people greater opportunities to participate in school governance or freedom to share their values and beliefs with each other. Student journalism programs are being defunded and in many cases, the content of the student newspaper is more tightly regulated than ever before. Schools no longer offer opportunities for students to actively debate public affairs out of fear of a push-back from politically sensitive parents.

In reality, young people have much greater opportunities to learn these civic skills outside school, as they "hang out," "mess around," and "geek out" online. This may be why so many of them use social network sites as resources to expand their contact with their friends at school, why they feel a greater sense of investment in their game guilds than in their student governments, or why they see YouTube as a better place to express themselves than the school literary magazine. Meanwhile, our schools are making it harder for teachers and students to integrate these materials into the classroom. Federal law has imposed mandatory filters on networked computers in schools and public libraries. There have been a series of attempts to pass legislation banning access to social network sites and blogging tools. Many teachers have told Project New Media Literacies that they can't access YouTube or other web 2.0 sites on their school computers. And the Student Press Law Center reports that a growing number of schools have taken disciplinary action against students because of things they've written on blogs published outside school hours, off school grounds, and through their own computers.

In other words, rather than promoting the skills and ethical responsibilities that will enable more meaningful participation in future civic life, many schools have sought to close down opportunities to engage with these new technologies and cultural practices. Of course, many young people, as the Digital Youth Project discovered, work around these restrictions (and in the process, find one more reason to disobey the adults in their lives). Yet, many other young people have no opportunities to engage with these virtual worlds, to enter these social networks, on their own. These school policies have amplified the already serious participation gap that separates information-haves and have-nots. Those students who have the richest online lives are being stripped of their best modes of learning as they pass into the schoolhouse and those who have limited experiences outside of classroom hours are being left further behind. And all of them are being told two things: that what they do in their online lives has nothing to do with the things they are learning in school; and that what they are learning in school has little or nothing of value to contribute to who they are once the bell rings.

One of the goals of Project New Media Literacies has been to bring this participatory culture into the classroom as a key first step towards fostering a more participatory democracy. This isn't a matter of making school more "entertaining" or dealing with wavering student attention. It has to do with modeling powerful new forms of civic life and learning, of helping young people acquire skills that they are going to need to enter the workplace, to participate in public policy debates, to express themselves creatively, and to change the world. As we are doing this work, we are bumping up, again and again, against constraints which make it impossible for even the most determined, dedicated, and informed teachers to bring many of these technologies and cultural practices into their classrooms. It isn't simply that young people know more about Facebook than their teachers; it is that for the past decade, schools have sought to insulate themselves from these sites of potential disruption and transformation, hermetically sealing themselves off from these social networks and from the mechanisms of participatory culture. The first we can overcome through better teacher training, but the second is going to require us to rethink basic school policies if schools are going to pursue their traditional civic missions in ways that enhance these new forms of citizenly engagement.

This article was written for Threshold Magazine's special issue on "Learning in a Participatory Culture." Read more about Project New Media Literacies here.

"Geeking Out" For Democracy (Part One)

On the eve of our conference at MIT on "Learning in a Participatory Culture," Cable in the Classroom has joined forces with Project New Media Literacies to edit a special issue of Threshold which centers on the work we've been doing and the vision behind it. Among the features are a wonderful graphic showing the new learning environment and how informal, individual, and school-based learning can work together to reinforce the core social skills and cultural competencies we've been discussing; a transcribed conversation with Benjamin Stokes, Daniel T. Hickey, Barry Joseph, John Palfrey, and myself about the challenges and opportunities surrounding bringing new media into the classroom; James Bosco adopting a school reform perspective on these issues; and a range of pieces by the core researchers on our team describing what happened when we introduced some of our materials into schools or after school programs. If you wanted to attend the conference but just couldn't make it to Cambridge, you can follow along through the live webcasts of the event. Check here for details.

Over the next few weeks, I am going to be showcasing the work of Project New Media Literacies and introducing you to some of our curricular materials which are just now going public. Along the way, you will get a chance to read several pieces from the Threshold magazine, including one from our award-winning research director Erin Reilly, get some reflections from some of our students about how they learned about and through popular culture, and learn about how spreadability may impact education. Today and next time, I will be running the essay which I wrote for the magazine, which maps the ways I am starting to think about the relationship between participatory culture and participatory democracy.

And if that's not enough New Media Literacies thinking for you, check out this great podcast put together by Barry Joseph and others at Global Kids, one of our research partners, which includes a conversation between Mimi Ito and myself and an interview with Constance Steinkuehler.

"Geeking Out" For Democracy

by Henry Jenkins

In his book, Bowling Alone, sociologist Robert Putnam suggests that many members of the post-WWII generation discovered civic engagement at the local bowling alley. The bowling alley was a place where people gathered regularly not simply to play together, but to talk about the personal and collective interests of the community, to form social ties, and to identify common interests. In a classic narrative of cultural decline, Putnam blames television for eroding these strong social ties, resulting in a world where people spent more time isolated in their homes and less time participating in shared activities with the larger community.

But what does civic engagement look like in the age of Facebook, YouTube, and World of Warcraft? All of these new platforms are reconnecting home-based media with larger communities, bridging between our public and private lives. All offer us a way to move from media consumption towards cultural participation.

During a recent visit to Santiago, I sat down with Chilean Senator Fernando Flores Labra, who believes that the guild structure in the massively multiplayer video game, World of Warcraft, offers an important training ground for the next generation of business and political leaders. (Guilds are affiliations of players who work together towards a common cause, such as battling the monsters or overcoming other enemies in the sword-and-sorcery realm depicted in the game.) The middle-aged Labra, with his slicked back hair, his paunchy midsection, and his well-pressed suits, is probably not what you expect a World of Warcraft player to look like. Yet, he's someone who has spent, by his own estimate, "thousands of hours playing these games, with hundreds of people, of all ages, all over the world."

Labra recently invited leading business and political leaders to come together and learn more about such games, explaining: "I am convinced that these technologies can be excellent laboratories for learning the practices, skills and ethics required to succeed in today's global environment, where people are increasingly required to interact with people all over the world, but still have a hard time working with their colleagues in the office next door, never mind with their new colleagues, whom they have never met, on the other side of the world. If an organization is to survive and thrive in today's era of globalization, its leaders must ensure that members of their organization become experts in operational coordination among geographically and culturally diverse groups; build and cultivate trust among their various stakeholders, including their employees, their customers and their investors, all of whom may be culturally and geographically diverse; cultivate people that are able to act with leadership in an era of rapid and constant change."

Playing World of Warcraft requires the mobilization of a large number of participants and the coordination of efforts across a range of different skill groups. Experienced players find themselves logging into the game not simply because they want to play but because they feel an obligation to the other players. Participants often network outside the game space to coordinate their efforts and soon find themselves discussing a much broader range of topics (much like Putnam's bowlers). Participants develop and deploy tools which allow them to manage complex data sets and monitor their own performances. And guild leaders, many of whom are still in their teens, learn to deal with their team members' complex motivations and sometimes conflicting personalities.

Whatever these folks are doing, they are not "bowling alone." If Putnam's correct, bowling was more than a game for post-war citizens, and World of Warcraft is more than a game for many students in your classrooms.

But let's take it a step further. Game guilds and other kinds of social networks are as central to what we mean by civic engagement in the 21st century as civic organizations were to the community life of the 20th century. If bowling helped connect citizens at the geographically local level, these new kinds of communities bring people together from diverse backgrounds, including adults and youths, and across geographically dispersed communities. Such dispersed social ties are valuable in a world where the average American moves once every four or five years, often across regions, and where many of us find ourselves needing to interact with colleagues around the planet.

I use the term "participatory culture" to describe the new kinds of social and creative activities which have emerged in a networked society. A participatory culture is a culture with relatively low barriers to artistic expression and civic engagement, strong support for creating and sharing one's creations, and some type of informal mentorship whereby what is known by the most experienced is passed along to novices. A participatory culture is also one in which members believe their contributions matter, and feel some degree of social connection with one another. Participatory culture shifts the focus of literacy from one of individual expression to community involvement.

The work we are doing through the MacArthur Foundation's emerging Digital Media and Learning Initiative, a network of scholars, educators, and activists, starts from the premise that these new media platforms represent important sites of informal learning. The time young people spend, outside the classroom, engaging with these new forms of cultural experience fosters real benefits in terms of their mastery of core social skills and cultural competencies (the New Media Literacies) they are going to be deploying for years to come. While much has been said about why 21st century skills are essential for the contemporary workplace, they are also valuable in preparing young people for future roles in the arts, politics, and community life. Learning how to navigate social networks or produce media may result in a sense of greater personal empowerment across all aspects of youth's lives.

In a recent report, documenting a multi-year, multi-site ethnographic study of young people's lives on and off line, the Digital Youth Project suggests three potential modes of engagement which shape young people's participation in these online communities. First, many young people go on line to "hang out" with friends they already know from schools and their neighborhoods. Second, they may "mess around" with programs, tools, and platforms, just to see what they can do. And third, they may "geek out" as fans, bloggers, and gamers, digging deep into an area of intense interest to them, moving beyond their local community to connect with others who share their passions. The Digital Youth Project argues that each of these modes encourages young people to master core technical competencies, yet they may also do some of the things that Putnam ascribed to the bowling leagues of the 1950s -- they strengthen social bonds, they create shared experiences, they encourage conversations, and they provide a starting point for other civic activities.

For the past few decades, we've increasingly talked about those people who have been most invested in public policy as "wonks," a term implying that our civic and political life has increasingly been left to the experts, something to be discussed in specialized language. When a policy wonk speaks, most of us come away very impressed by how much the wonk knows but also a little bit depressed about how little we know. It's a language which encourages us to entrust more control over our lives to Big Brother and Sister, but which has turned many of us off to the idea of getting involved. But what if more of us had the chance to "geek out" about politics? What if we could create points of entry where young people saw the affairs of government as vitally linked to the practices of their everyday lives? "Geeking out" is empowering; it motivates our participation and in a world of social networks, pushes us to find others who share our passions. If being a "wonk" is about what you know, being a "geek" involves an ongoing process of sharing information and working through problems with others. Being a political "geek" involves taking on greater responsibility for solving your own problems, working as a member of a larger community, whether one defined in geographic terms or through shared interests.

Maybe "geeking out" about politics is key to fostering a more participatory democracy, one whose success is measured not simply by increases in voting (which we've started to see over the past few election cycles) but also increased volunteerism (which shows up in survey after survey of younger Americans), increased awareness of current events, increased responsibility for each other, and increased participation in public debates about the directions our society is taking. "Geeking out" might mean we think about civic engagement as a lifestyle rather than as a special event.

We still have a lot to learn about how someone moves from involvement in participatory culture towards greater engagement with participatory democracy. But so far, there are some promising results when organizations seek to mobilize our emerging roles as fans, bloggers, and gamers. Consider, for example, the case of the HP Alliance, an organization created by Andrew Slack, a 20-something activist and stand-up comic, who saw the Harry Potter books as potential resources for mobilizing young people to make a difference in the world. Slack argues that J.K. Rowling's novels have taught a generation to read and write (through fan fiction) and that the series now has the potential to help many of those young people cross over into participation in the public sphere. Creating what he describes as "Dumbledore's Army" for the real world, the HP Alliance uses the story of a young man who questioned authority, organized his classmates, and battled evil to get young people connected with a range of human rights organizations. Slack works closely with Wizard Rock bands, who perform at fan conventions, record their music as mp3s, and distribute it via social network sites and podcasts. He works with the people who run Harry Potter fan websites and blogs to help spread the word to the larger fan community. So far, the HP Alliance has moved more than 100,000 people, many of them teens, to contribute to the struggles against genocide in Darfur, the battles for workers' rights at Wal-Mart, and the campaign against Proposition 8 in California.

Many parents and educators grumble about this generation's lack of motivation or commitment, describing them as too busy playing computer games to get involved in their communities. For some teens, this may be sadly true. But, Global Kids, a New York organization, has been using Second Life to bring together youth leaders from around the world and to give them a playground through which they can imagine and stage solutions to real world problems. Global Kids, for example, used machinima -- a practice by which game engines are deployed to create real time digital animation -- to document the story of a child soldier in Uganda and circulate it via YouTube and other platforms to call attention to the plight of youth in the developing world. Much like the HP Alliance, Global Kids is modeling ways we can bridge between participatory culture and participatory democracy.

A New "Platform" for Games Research?: An Interview with Ian Bogost and Nick Montfort (Part Two)

Henry: Does Platform Studies necessarily limit the field to writers who can combine technological and cultural expertise, a rare mix given the long-standing separation between C.P. Snow's "Two Cultures"? Or should we imagine future books as emerging through collaborations between writers with different kinds of expertise?

Nick: We definitely will encourage collaborations of this sort, and we know that collaborators will need all the encouragement they can get. It's unusual and difficult for humanists to collaborate. When the technical and cultural analysis that you need to do is demanding, though, as it is in a platform study, it's great to have a partner working with you.

Personally, I prefer for my literary and research collaborations to be with similar "cross-cultural" people, such as Ian; I don't go looking for a collaborator to balance me by knowing about all of the technical matters or all of the cultural and humanistic ones. It is possible for collaborators on one side to cross the divide and find others, though. Single-authored books are fine as well, and it's okay with me if the single author leans toward one "culture" or the other, or even if the author isn't an academic.

Ian: I also think that this two culture problem is resolving itself to some extent. When I look at my students, I see a very different cohort than were my colleagues in graduate school. I see a fluency in matters of technology and culture that defies the expectations of individual fields. So in some ways, I see the Platform Studies series as an opportunity for this next generation of scholars as much as it is for the current one, perhaps even more so.

When you think about it, popular culture in general is also getting over the two culture problem. There are millions of people out there who know something about programming computers. As I've watched the press and the public react to Racing the Beam, it's clear to me that discussions of hardware design and game programming are actually quite welcome among a general readership.

Henry: What relationship do you see between "platform studies" and the "science, technology and society" field?

Nick: A productive one. We're very much hoping that people in STS will be interested in doing platform studies and in writing books in the series. Books in the series could, of course, make important contributions in STS as well as in digital media.

Ian: Indeed, STS already tends strongly toward the study of how science and technology underlie things. Platform studies has something in common with STS in this regard. But STS tends to focus on science's impact on politics and human culture rather than human creativity. This latter area has typically been the domain of the humanities and liberal arts. One way to understand platform studies is as a kind of membrane between computing, STS, and the humanities. We think there's plenty of productive work to be done when these fields come together.

Henry: Why did you decide to focus on the Atari Video Computer System as the central case study for this book?

Ian: We love the Atari VCS. It's a platform we remember playing games on and still do. In fact, the very idea for platform studies came out of conversations Nick and I had about the Atari. We found ourselves realizing that a programmer's negotiation between platform and creativity takes place in every kind of creative computing application.

Nick: Another factor was historical. While contributing to the cultural understanding of video games a great deal, game studies hasn't looked to its roots enough. A console as influential as the Atari VCS deserved scholarly and popular attention beyond mere retro nostalgia. We wanted to bring that sort of analysis to bear.

Ian: Finally, I've been using the Atari VCS for several years now in my classes, both as an example and as an exercise. I have my Introduction to Computational Media class program small games on the system as an exercise in constraint. I also taught a graduate seminar entirely devoted to the system. Moreover, I often make new games for the system, some of which I'll be releasing this spring. So overall, the Atari VCS is a system that has been and remains at the forefront of both of our creative and critical interests.

In fact, I've continued to do platform studies research on the Atari VCS beyond the book. A group of computer science capstone students under my direction just completed a wonderful update to the "Stella" Atari VCS emulator, adding effects to simulate the CRT television. These include color bleed, screen texture, afterimage -- all matters we discuss in the book. I have a webpage describing the project at http://www.bogost.com/games/a_television_simulator.shtml.
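To make concrete what an effect like "color bleed" involves, here is a minimal, illustrative sketch; it is not the actual code from the Stella emulator project (whose implementation is far more sophisticated), just a toy model of the idea: on a CRT, a pixel's color leaks into its horizontal neighbors, softening the hard edges of the Atari's blocky graphics.

```python
# Toy model of CRT "color bleed" (illustrative only, not Stella's code):
# each RGB pixel on a scanline absorbs a fraction `spread` of its
# left and right neighbors' color, blurring sharp color transitions.

def color_bleed(scanline, spread=0.25):
    """Return a new scanline of (r, g, b) tuples with neighbor bleed applied."""
    out = []
    n = len(scanline)
    for i, pixel in enumerate(scanline):
        # Edge pixels reuse their own color for the missing neighbor.
        left = scanline[i - 1] if i > 0 else pixel
        right = scanline[i + 1] if i < n - 1 else pixel
        mixed = tuple(
            round((1 - 2 * spread) * c + spread * lc + spread * rc)
            for c, lc, rc in zip(pixel, left, right)
        )
        out.append(mixed)
    return out

# A hard edge between red and black pixels softens into a gradient:
line = [(255, 0, 0), (255, 0, 0), (0, 0, 0), (0, 0, 0)]
print(color_bleed(line))
# → [(255, 0, 0), (191, 0, 0), (64, 0, 0), (0, 0, 0)]
```

A real emulator filter would also model afterimage (blending each frame with a decayed copy of the previous one) and screen texture (modulating brightness with a phosphor-mask pattern), but the neighbor-mixing pattern above is the basic shape of all three effects.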

Henry: You focus the book around case studies of a number of specific Atari titles from Adventure and Pac-Man to Star Wars: The Empire Strikes Back. Can you say more about how these examples allowed you to map out the cultural impact and technical capacities of the Atari system?

Nick: The specific examples gave us the opportunity to do what you can do with close readings: drill down into particular elements and see how they relate to a game, a platform, and a culture. But we wouldn't have found the same insights if we had just picked a game, or six games from different platforms, and got to work. We used these games to see how programmers' understanding of the platform developed and how the situation of computer gaming changed, how people challenged and expanded the 1977 idea of gaming that was frozen into the Atari VCS when they put this wonderful machine together.

Ian: We also chose to focus on a specific period, the early years of the Atari VCS, so to speak, from 1977 to 1983. These games in particular allowed us to characterize that period, as programmers moved from their original understanding of this system -- one based on porting a few popular coin-op games -- to totally different and surprising ways of making games on it.

Henry: Platform Studies seems to align closely with other formalist approaches to games. Can it also be linked to cultural interpretation?

Nick: Formalist? Really? We were indeed very concerned with form and function in Racing the Beam, so I won't shun the label, but we tried to be equally attentive to the material situation of the Atari VCS and the cartridges and arcade games we discussed. For instance, we included an image of the Shark Jaws cabinet art so that the reader could look at the typography and decide whether Atari was attempting to refer to Spielberg's movie. We discuss the ramifications of using a cheaper cartridge interface in the VCS design, one that was missing a wire.

Ian: We should also remember the technical creativity that went into designing a system like the Atari VCS, or into programming games for it. The design of the graphics chip, for example, was motivated by a particular understanding of what it meant to play a game: two human players, side by side, each controlling a character on one side of the screen or another.

By the time David Crane created Pitfall! years later, those understandings had changed. Pitfall! is a one-player game with a twenty-minute clock. But it's also a wonderful mash-up of cultural influences: Tarzan, Indiana Jones, Heckle and Jeckle.

Nick: I'll admit that ours is a detailed analysis that focused on specifics (formal, material, technical) rather than being based around broad cultural questions: it's bottom-up rather than top-down. We're still trying to connect the specifics of the Atari VCS (and other platforms) to culture, though. The project is not only linked with, but part of, cultural interpretation.

Ian: I'd go even further; there's nothing particularly formalist about a platform studies approach, if formalism means a preference of material and structure over cultural reception and meaning. If anything, I think our approach offers a fusion of many influences, rather than an obstinate grip on a single one.

Henry: There is still a retro-gaming community which is deeply invested in some of these games. Why do you think these early titles still command such affection and nostalgia?

Ian: Some of the appeal is related to fond memories and retro-nostalgia, certainly. Millions of people had Ataris and enjoyed playing them. Just as the Apple ][ or the Commodore 64 may have introduced someone to computing, the Atari VCS might have introduced him or her to videogaming. So part of the appeal of returning to these games is that of returning to the roots of a pleasurable pastime.

Nick: That said, we resist appeals to nostalgia in the book and our discussions about it, not because nostalgia and retro aesthetics are bad, but because it would be a shame if people thought you could only look back at video games to be nostalgic. There are reasons for retro-gaming that go beyond nostalgia, too. It's driven, in part, by the appeal of elegance, by a desire to explore the contours of computing history with an awareness of what games are like now, and by the ability of systems like the Atari VCS to just be beautiful and produce really aesthetically powerful images and compelling gameplay.

Ian: It's also worth noting that there is a thriving community interested in new Atari games, many of whom congregate on the forums at AtariAge.com. For these fans and hobbyist creators, the Atari is a living platform, one that still has secrets left to reveal. So the machine can offer interest beyond retro-gaming as well.

Henry: What factors contributed to the decline of the Atari empire? How did that decline impact the future of the games industry and of game technology?

Nick: I think it takes a whole book on the complex corporate history of Atari to even start answering this question. Our book is focused on the platform rather than the company. Scott Cohen's Zap!: The Rise and Fall of Atari is a book about the company, and my feeling is that even that one doesn't really answer that question entirely. We're hoping that there will be more books on Atari overall before too long.

Ian: There are some reasons for Atari's decline that are connected specifically to the Atari VCS platform, though. It turned out to be incredibly flexible and productive, to support more types of game experience than its creators ever could have imagined. No doubt, Atari never imagined that third-party companies such as Activision would come along and make literally hundreds of games for the system by 1983, cutting in on their business model right at the most profitable point. But the system was flexible enough for that to happen, too.

Nick: That's why Nintendo did everything they could, by license and through technical means, to lock down the NES and to prevent this sort of thing from happening with it. The industry has been like that ever since.

Ian: As we point out in the book, this was a bittersweet solution. Nintendo cauterized the wound of retailer reticence, but it also introduced a walled garden. Nintendo (and later Sony and Microsoft) would get to decide what types of games were "valid" for distribution. Before 1983, the variety of games on the market was astounding. So, on the one hand, we're still trying to recover from the setback that was first-party licensing. But on the other hand, we might not have a games industry if it wasn't for Nintendo's adoption of that strategy.

Henry: Can you give us a sense of the future of the Platform Studies project? What other writers and topics can we expect to see? Are you still looking for contributors?

Nick: Yes, we're definitely looking for contributors, although we're pleased with the response we've had so far. We expect a variety of platforms to be covered -- not only game systems, but famous early individual computers, home computers from the 1980s, and software platforms such as Java. Some families of platforms will be discussed in books, for instance, arcade system boards. And although every book will focus on the platform level, we anticipate a wide variety of different methods and approaches to platforms. While getting into the specifics of a platform and how it works, people may use many different methodologies: sociological, psychoanalytic, ethnographic, or economic, for example.

Ian: In terms of specific projects, we have a number of proposals in various stages of completeness and review. It's probably a bit early to talk about them specifically, but I can say that all of the types of platforms Nick just mentioned are represented.

There are a few different types of book series; some offer another venue for work that is already being done, while others invite and maybe even encourage a new type of work to be done. I suspect that Platform Studies is of the latter sort, and we're gratified to see authors thinking of new projects they didn't even realize they wanted to pursue.

Henry: You both teach game studies within the humanities at major technical institutions. How do the contexts in which you are working impact the approach you are taking here?

Ian: Certainly both Georgia Tech and MIT make positive assumptions about the importance of matters technical. Humanities and social science scholarship at our institutions thus often takes up science and technology without having to justify the idea that such topics are valid objects of study.

Nick: I have to agree -- it's very nice that I don't have to go around MIT explaining why it's legitimate to study a computing system or that video games and digital creativity are an important part of culture.

Ian: Additionally, at Georgia Tech we have strong relationships between the college of liberal arts, the college of engineering, and the college of computing. I have many colleagues in these fields with whom I speak regularly. I have cross-listed my courses in their departments. We even have an undergraduate degree that is co-administered by liberal arts and computing. So there's already an ecosystem that cultures the technical pursuit of the humanities, and vice versa.

I also think technical institutes tend to favor intellectual experimentation in general. We often hear cliches about the "entrepreneurial" environment at technical institutes, a reference to their tendency to encourage the commercial realization of research. But that spirit also extends to the world of ideas, and scholars at a place like Georgia Tech are perhaps less likely to be criticized, ostracized, or denied tenure for pursuing unusual if forward-thinking research.

Dr. Ian Bogost is a videogame designer, critic, and researcher. He is Associate Professor at the Georgia Institute of Technology and Founding Partner at Persuasive Games LLC. His research and writing considers videogames as an expressive medium, and his creative practice focuses on games about social and political issues. Bogost is author of Unit Operations: An Approach to Videogame Criticism (MIT Press 2006), of Persuasive Games: The Expressive Power of Videogames (MIT Press 2007), and co-author (with Nick Montfort) of Racing the Beam: The Atari Video Computer System (MIT Press 2009). Bogost's videogames about social and political issues cover topics as varied as airport security, disaffected workers, the petroleum industry, suburban errands, and tort reform. His games have been played by millions of people and exhibited internationally.

Nick Montfort is assistant professor of digital media at the Massachusetts Institute of Technology. Montfort has collaborated on the blog Grand Text Auto, the sticker novel Implementation, and 2002: A Palindrome Story. He writes poems, text generators, and interactive fiction such as Book and Volume and Ad Verbum. Most recently, he and Ian Bogost wrote Racing the Beam: The Atari Video Computer System (MIT Press, 2009). Montfort also wrote Twisty Little Passages: An Approach to Interactive Fiction (MIT Press, 2003) and co-edited The Electronic Literature Collection Volume 1 (ELO, 2006) and The New Media Reader (MIT Press, 2003).

A New "Platform" for Games Research?: An Interview with Ian Bogost and Nick Montfort (Part One)

Any time two of the leading video and computer game scholars -- Ian Bogost (Georgia Tech) and Nick Montfort (MIT) -- join forces to write a book, that's a significant event in my book. When the two of them lay down what amounts to a new paradigm for game studies as a field -- what they are calling "Platform Studies" -- and apply it systematically -- in this case, to the Atari system -- this is something which demands close attention from anyone interested in digital media. So, let me urge you to check out Racing the Beam: The Atari Video Computer System, released earlier this spring by MIT Press. In the interview that follows you will get a good sense of what the fuss is all about as the dynamic duo lay out their ideas for the future of games studies, essentially further raising the ante for anyone who wants to do serious work in the field. As someone who would fall far short of their ambitious bar for the ideal games scholar, I read this discussion with profoundly mixed feelings. I can't argue with their core claim that the field will benefit from the arrival of a generation of games scholars who know the underlying technologies -- the game systems -- as well as they know the games. I certainly believe that the opening up of a new paradigm in games studies will only benefit those of us who work with a range of other related methodologies. If I worry, it is because games studies as a field has moved forward through a series of all-or-nothing propositions: either you do this or you aren't really doing game studies. And my own sense is that fields of research grow best when they are expansive, sucking in everything in their path, and sorting out the pieces later.

That said, I have no reservations about what the authors accomplish in this rigorous, engaging, and ground-breaking book. However you think of games studies as an area of research, there will be things in this book which will provoke you, and where Bogost and Montfort are concerned, I wouldn't have it any other way.

Henry: Racing the Beam represents the launch of a new publishing series based on what you are calling "Platform Studies." What is platform studies and why do you think it is an important new direction for games research?

Nick: Platform studies is an invitation to look at the lowest level of digital media -- the computing systems on which many sorts of programs run, including games. And specifically, it's an invitation to consider how those computing systems function in technical detail, how they constrain and enable creative production, and how they relate to culture.

Ian: It's important to note that platform studies isn't a particular approach; you can be more formalist or materialist, more anthropological or more of a computer scientist, in terms of how you consider a platform. No matter the case, you'll still be doing platform studies, as long as you consider the platform deeply. And, while platform studies is of great relevance to the study of video games, these studies can also be used to better understand digital art, electronic literature, and other sorts of computational cultural production.

Nick: In games research in particular, the platform seems to have a much lower profile as we approach 2010 than it did in the late 1970s and 1980s. Games are developed for both PC and Xbox 360 fairly easily, and few scholars even bother to specify which version of such a game they're writing about, despite differences in interface, in how these games are burdened with DRM, and in the contexts of play (to name just a few factors). At the same time, there are these recent platforms that feature unusual interfaces and limited computational power, relative to the big iron consoles: Nintendo's Wii and DS and Apple's iPhone.

Ian: And let's not forget that games are being made in Flash and for other mobile phones. Now, developers are very acutely aware of what these platforms can do and of how important it is to consider the platform level. But their implicit understanding doesn't always make it into wider discussions, and that understanding doesn't always connect to cultural concerns and to the history of gaming and digital media.

Nick: So, we think that by looking thoroughly at platforms, we will, first, understand more about game consoles and other game platforms, and will be able to both make better use of the ones we have (by creating games that work well with platforms) and also develop better ones. Beyond that, we should be able to work toward a better understanding of the creative process and the contexts of creativity in gaming and digital media.

Henry: What do you think has been lost in game studies as a result of a lack of attention to the core underlying technologies behind different game systems?

Nick: For one thing, there are particular things about how games function, about the interfaces they present, and about how they appear visually and how they sound which make no sense (or which can be attributed to causes that aren't really plausible) unless you make the connection to platform. You can see these in every chapter of Racing the Beam and probably in every interesting Atari VCS game.

Ian: And more simply put, video games are computational media. They are played on computers, often very weird computers designed only to play video games. Isn't it reasonable to think that observing something about these computers, and the relationship between each of them and the games that they hosted, would lead to insights into the structure, meaning, or cultural significance of such works?

Here's an example from the book: the graphical adventure genre, represented by games like The Legend of Zelda, emerged from Warren Robinett's attempts to translate the text-based adventure game Colossal Cave onto the Atari VCS. The machine couldn't display text, of course, so Robinett chose to condense the many actions one can express with language into a few verbs that could be represented by movement and collision detection. The result laid the groundwork for a popular genre of games, and it was inspired largely by the way one person negotiated the native abilities of two very different computers.
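The translation Ian describes can be sketched in a few lines. This is only an illustrative toy (in Python, with invented names), not Robinett's actual Adventure code, but it shows the core move: the text-adventure verbs "take" and "open" disappear, replaced by movement plus collision detection.

```python
def collides(a, b):
    # Two objects "collide" when they occupy the same grid cell.
    return (a["x"], a["y"]) == (b["x"], b["y"])

def step(player, dx, dy, key, door):
    # The only input is movement; verbs emerge from contact.
    player["x"] += dx
    player["y"] += dy
    if collides(player, key):
        player["holding"] = "key"   # walking into the key *is* "take key"
    if collides(player, door) and player.get("holding") == "key":
        door["open"] = True         # touching the door while carrying the key *is* "open door"

player = {"x": 0, "y": 0}
key = {"x": 1, "y": 0}
door = {"x": 2, "y": 0}

step(player, 1, 0, key, door)  # bump into the key: it is picked up
step(player, 1, 0, key, door)  # bump into the door: it opens
```

The player never types a command; the entire verb vocabulary of the text game has been folded into spatial contact, which is exactly the condensation that made the graphical adventure genre possible on a machine that couldn't display text.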

Nick: More generally, the platform is a frozen concept of what gaming should be like: Should it come in a fake wood-grain box that looks like a stereo cabinet and fits in the living room alongside stereo components? Should it have two different pairs of controllers and difficulty switches so that younger and older siblings can play together with a handicap? Only if we look at the platform can we understand these concepts, and then go on to understand how the course of game development and specific games negotiate with the platform's concept.

Henry: Early on, there were debates about whether one needed to be a "gamer" to be able to contribute to games studies. Are we now facing a debate about whether you can study games if you can't read code or understand the technical schematics of a game system?

Nick: All sorts of people using all sorts of methods can make and have made contributions to game studies, and that includes non-ethnographers, non-lawyers, non-narratologists, and those without film studies backgrounds as well as people who can't read code or understand schematics. Games are a tremendous phenomenon, and it would be impossible for someone to have every skill and bit of background relevant to studying them. We're lucky that many different sorts of people are looking at games from so many perspectives.

That said, whether one identifies as a "gamer" is a rather different sort of issue than whether one understands how computational systems work. If your concern is for people's experience of the game -- how they play it, what meaning they assign to it, and how the experience relates to other game experiences -- then the methods that are most important to you will be the ones related to understanding players or interpreting the game yourself. But if you care about how games are made or how they work, it makes a lot of sense to know how to program (and how to understand programs) and to have learned at least the bare outlines of computer architecture.

Ian: Even if you want to thoroughly study something non-interactive, like cutscenes, won't you have to understand both codecs and the specifics of 3D graphics (ray tracing, texture mapping, etc.) to understand why certain choices were made in creating a cutscene? How can you really understand Geometry Wars without getting into the fact that vector graphics display hardware used to exist, and that the game is an attempt to recreate the appearance of those graphics on today's flat-panel raster displays? How could you begin to talk about the difference between two radically different and culturally relevant chess programs, Video Chess for the Atari VCS (which fit in 4K) and the world-dominating Deep Blue, without considering their underlying technical differences -- and going beyond noticing that one is enormously powerful and the other minimal?

Nick: I certainly don't want to ban anyone from the field for not knowing about computing systems, but I also think it would be a disservice to give out game studies or digital media degrees at this point and not have this sort of essential technical background be part of the curriculum.


How Susan Spread and What It Means

I've done four interviews over the past few days -- with the Washington Post, the Boston Globe, the Philadelphia Inquirer, and The Mainichi Shimbun (Japan) -- which in one way or another have touched on the dramatic story of Susan Boyle, the dowdy and musically gifted contestant on Britain's Got Talent who has become the new queen of both broadcast and participatory media.

What I've been telling all of them is that Boyle's success is perhaps the most spectacular example to date of spreadability in action, and indeed, since we've discovered a fair number of busy corporate types out there who don't feel like reading the eight installments of "If It Doesn't Spread, It's Dead," I figured I'd use this space to spell out again some core principles of spreadable media and show how the Boyle phenomenon illustrates how they work.

The statistics are moving so fast that it is impossible to keep track of them, but here are the basic data points as reported on Monday by the Washington Post:

According to Visible Measures, which tracks videos from YouTube, MySpace and other video-sharing sites, all Boyle-oriented videos -- including clips of her television interviews and her recently released rendition of "Cry Me a River," recorded 10 years ago for a charity CD -- have generated a total of 85.2 million views. Nearly 20 million of those views came overnight.

The seven-minute video that was first posted on YouTube and then widely circulated online easily eclipsed more high-profile videos that have been around for months. Tina Fey's impersonation of Sarah Palin has clocked in 34.2 million views, said the folks at Visible Measures, while President Obama's victory speech on election night has generated 18.5 million views.

But it's not just in online video where Boyle, the unassuming woman from a tiny Scottish town, has dominated. Her Wikipedia entry has attracted nearly 500,000 page views since it was created last Sunday. Over the weekend, her Facebook fan page was flooded with comments, at some points adding hundreds of new members every few minutes. The page listed 150,000 members at 1 p.m. Friday. By last night there were more than a million.

By comparison, the 2008 season finale of American Idol, one of the highest rated programs on American broadcast television, attracted almost 32 million viewers, or between a third and a half the number of people who had watched Susan's video as of Monday of this week. So, what's happening here?

Contrary to what you may have read, Susan Boyle didn't go "viral." She hasn't gained circulation through infection and contagion. The difference between "viral" and "spreadable" media has to do with the conscious agency of the consumers. In the viral model, nobody is in control. Things just go "viral." In the Spreadability model, things spread because people choose to spread them and we need to understand what motivates their decision and what facilitates the circulation.

While she originated on British broadcast television, her entry into the American market was shaped more by the conscious decisions of 87-plus million people who chose to pass her video along to friends, families, workmates, and fellow fans than by any decision by network executives to put her on the airwaves in the first place.

This is not to say that the original video was not professionally produced and edited in such a way as to maximize the emotional impact of what happened to her at that particular talent competition. This is not to say that our interest in the content wasn't shaped by our general familiarity with the genre conventions of reality television (leading us to expect another William Hung kind of moment) or by our particular perceptions of and investments in one Simon Cowell, whose boyish grin and sheepish expression represent the ultimate payoff for her spectacular performance (which we can appreciate because we've seen American Idol and know what a tough-minded SOB Simon can be). And that's not to say that the visibility of Susan Boyle hasn't been amplified as she's gotten interviewed on Good Morning America and spoofed on the Tonight Show, to cite two examples. We have to understand the Susan Boyle phenomenon as occurring at the intersection between broadcast media (or, to use Amanda Lotz's term, television in the post-network era) and participatory media. In other words, this is convergence culture at work.

The Susan Boyle phenomenon would not have played out the same way if there were no YouTube, no social networks, no Twitter. Indeed, the very similar video of Paul Potts making a similarly surprising success on the same program generated nowhere near the same level of circulation a year ago (though it may have also prepared the way for the public's interest in this story). What allowed the Susan Boyle video to travel so far so fast was that it could travel so far so fast.

Most of the people who saw it and decided to pass it along had a sense of discovery. They could anticipate that they were sharing the video with people who probably hadn't seen it already, precisely because the content was not yet being broadcast on commercial television. The fans found Susan Boyle before the networks did -- much like that old saw that by the time a trend makes it to the cover of Time Magazine, it's already over. There was an infrastructure in place -- across multiple communication systems -- which would allow anyone to share this content with anyone else who they thought would like to see it with minimal effort. We can send links. We can embed the content in our blogs.

The role of Twitter in all of this is most interesting. Twitter Twits did what Twitter Twits do best -- they tweeted alerts about an interesting bit of content and were able to embed micro-links so their followers could quickly access the content. I think of Twitter as like a swarm of bees that spread out in all directions, searching for interesting materials to share. When someone finds it, they come back to the hive, do a little honey dance, and send the swarm scampering behind them. This is how collective intelligence outsmarts the broadcast decision-makers: The Twitter Tribes can figure out what content the audience wants to see because the Twitter Tribes are the audience, making decisions in real time.

Equally important is that we had the agency to decide which content we wanted to pass along -- out of all of the possible video clips posted on YouTube last week or indeed, out of all of the many segments of media content which are circulating around us.

We believe that we can only understand what happened here by identifying the choices which consumers made as they decided to pass along this content and not that content. USA Today on Monday sought to identify a range of different motives which shaped the decisions to pass along this particular content: "Vindication . . . Surprise . . . Guilt . . . Shame . . . Psychology . . . Hope . . . Distraction . . . Empowerment . . . Authenticity . . . Spiritual Solace."

There's no need to identify a single cause for why people spread this content. Different people spread this content for different reasons. Hell, often, the same person spreads this content for different reasons. I sent the link via e-mail to my wife with a note saying "want to feel warm and fuzzy," to a close friend with a note suggesting "this will crack you up," and to my Twitter and Facebook mobs with the suggestion it illustrates something important about reality television because you wouldn't believe this if you saw it in a movie. My sharing of the video meant something different in each of these relationships. We can certainly identify a range of common reasons for why the emotional structure of this video might motivate people to circulate it.

Does the wide-spread circulation of reality television suggest the triviality of what constitutes public interests? I don't think we can answer that question without knowing what we are using Susan Boyle to talk about. Her meaning doesn't reside in the video itself -- we won't exhaust it no matter how many times we watch it. The meaning rests in the conversations that Susan Boyle enables us to have with each other. As it starts to circulate, the Susan Boyle video gets inserted into all kinds of ongoing conversations across a range of different communities, so that I've stumbled into prayer circles for Susan Boyle; I've found scientists talking about how someone with that body could produce such a sound; I've seen discussions among karaoke singers about her techniques, and I've seen reality television fans trying to explain why her success would never be possible given the rules of American Idol, which exclude someone her age from competing in the first place. Susan Boyle circulates because she's meaningful on many different levels, and after a while, all of this has started to go meta, so that we are spreading Susan's videos to talk about how fast they are being spread.

For many of the people who are spreading her videos, the transaction is understood through the lens of a gift economy. We share her because she allows us to make someone we care about have a somewhat better day. We share her because of what she allows us to say about ourselves, our world, and our relationships. I sent Susan to my wife as something like a Facebook Gift -- a short, quick, friendly gesture on a day when we weren't going to see each other until much later.

Yes, there were other groups who had other motives for getting me to pass along the content -- the producers of the programme and the network on which it aired, perhaps YouTube itself -- but their motives had very little to do with why I chose to share that video with people I cared about. So my circulation of the video needed to be negotiated between their interests and mine.

The fact that YouTube makes it easy to embed the content makes it easier for me to share it. The fact that Bit.ly allows me to reduce the length of the url allows me to tweet about it. And all of these technical innovations make it that much easier for the video to spread, but at the end of the day, it also spreads because I and all the rest of us have become more literate about social networking, because we are linked to more people and have more regular contact with them, because we now often interact with each other through sharing meaningful bits of media content.

Keep in mind a fundamental fact: many of the 97 plus million people who downloaded the video are part of a surplus audience from the perspective of the people who produced and marketed Britain's Got Talent. Indeed, beyond a certain point, Susan Boyle's rapid visibility becomes a liability rather than an asset. Keep in mind that Boyle stars in a British program which does not get commercial distribution in the United States. I can't turn on a television network -- cable or broadcast -- and watch the next installment of Britain's Got Talent. I can't go on Hulu and download that content. And I can't at present go on iTunes and buy this content. Market demand is dramatically outpacing supply.

What I can do, though, is consume illegal downloads of the series via various torrents or fan distribution sites, which have the flexibility to get the content into circulation without having to negotiate international deals or work through protectionist policies which make it hard to bring international content into the American market. Even with Cowell's production company already having working relations with multiple American networks, my bet is that he can't get that show on the air quickly enough for Americans to be able to catch up with the Brits.

Sure, Simon Cowell has already signed her to a contract and talks about how "there's every chance Susan Boyle will have the number one album in America" if she appears on Oprah. But the record can't go on sale fast enough to capitalize on this burst of public interest, and by the time it reaches the market, there's a good chance that her 15 minutes of fame will have expired.

Wired tells us that even where the media producers might have made money from the spread of Susan's video, they are so far choosing not to do so: "a Google spokeswoman responded to our e-mail and phone queries with some surprising news: 'That video is not being monetized.' We've contacted Sony (Simon Cowell's label) and FremantleMedia (the show's producer, owned by RTL Group not Sony as appeared in this update earlier) to try to determine why the $500,000 or more Boyle's video should have generated so far is apparently being left on the table -- despite the fact that both companies are confirmed revenue-sharing partners of YouTube." So, whatever calculations have gone into getting us to help spread this video, they don't make sense in terms of a simple and direct economic equation. This isn't about counting impressions and raking in the cash.

Keep in mind that what we've seen so far is her first appearance in a season-long competition, and the implication of this blockage becomes clear. I've argued here that piracy often reflects market failures on the part of producers rather than moral failures on the part of consumers. It isn't that people turn to illegal downloads because they want the content for free. My bet is that many of them would pay for this content, but it is not legally being offered to them. We can compare this to the global interest generated by Ken Jennings's phenomenal run on Jeopardy: Jeopardy was already syndicated in markets around the world, so when he generated buzz, he drew people back to the local broadcaster who was selling the content in their markets. They could tune in and see day by day whether he stayed in the game. Right now, everyone's still acting as if Susan Boyle were only one video, but they will wake up tomorrow or the next day and discover that lots of those people want to see what happens to her next.

When many of us write about the global circulation of media, the American circulation of British reality television isn't necessarily what comes first to mind. Indeed, there's some kind of mental block against understanding this content as international in the first place. Yet there is already a strong fan base in the United States for British media content, one which had already been downloading and circulating Britain's Got Talent even though no commercial producer had guessed that the series might generate this kind of American interest. And that fan base is now in a position where it may need to service Susan's growing audience.

Part of the reason Americans like Susan Boyle is that she's so damned British. USA Today says her story is like "a Disney movie," but it isn't: it's like a British movie, like Calendar Girls or Billy Elliot or The Full Monty, one of those down-to-earth dramas where average Brits cut across class and taste boundaries and do something extraordinary. The mixture of gritty realism, portly stars, eccentricity, class consciousness, and wild-eyed optimism is what draws many of us to British media in the first place.

We are used to talking about things that could only happen in America. Well, Susan Boyle is something that could only happen in Great Britain -- get used to it, because the next one will be something that could only happen in India or Japan. When we talk about pop cosmopolitanism, we are most often talking about American teens doing cosplay or listening to K-Pop albums, not church ladies gathering to pray for the success of a British reality television contestant, but it is all part of the same process. We are reaching across borders in search of content -- crossing zones which were used to organize the distribution of content in the broadcast era but which are much more fluid in an age of participatory culture and social networks.

We live in a world where content can be accessed quickly from any part of the globe, assuming it somehow reaches our radar, and where the collective intelligence of the participatory culture can identify content and spread the word rapidly when needed. Susan Boyle in that sense is a sign of bigger things to come -- content which wasn't designed for our market, content which wasn't timed for such rapid global circulation, gaining much greater visibility than ever before, with networks and production companies having trouble keeping up with the rapidly escalating demand.

And as we discover we like someone like Susan Boyle, we seek out more information. Suddenly charity records she made years ago spring up as videos on YouTube. Suddenly there's a flood of interest on Wikipedia about this previously unknown figure. And people are seeking out videos of Elaine Paige, the queen of British stage musicals, whom Susan identified as her role model. Many Americans had never heard of Paige before, so we can chart dramatic increases in views of her videos, though they are dwarfed by the Susan Boyle original. Most of the thousands of comments posted on the Paige videos make unfortunate comparisons with Susan Boyle, suggesting that even though Paige has historically been a much bigger star, with a string of commercial successes, for this week at least Susan Boyle's got the more dedicated fan base. Just to give us a baseline, some of the Elaine Paige YouTube videos reach more than a million viewers, whereas the rest don't get over 100,000. My theory is that Susan Boyle's fan base has discovered some of them and not others, accounting for the huge gap in traffic.

Or consider the fact that Susan Boyle gained more than a million Facebook subscribers in less than a week, at a time when Oprah and Ashton Kutcher have been battling it out to see who could be the first to get a million subscribers on Twitter. (Yes, Facebook has a much larger user base than Twitter, but it's still an impressive accomplishment!) This is not to say that, long term, Oprah couldn't help Susan Boyle open up her record to a much larger audience, just that in this frenzy of interest, she doesn't need Oprah or any other old-style broadcast celebrity to turn YouTube on its ear.

So, that's what Susan Boyle can teach us about Spreadability. So what happens next? Talk among yourselves. And while you are at it, spread the word.

Babylon 5's JMS Heads to MIT -- Buy Your Tickets Online

The annual Julius Schwartz Lecture, being held at MIT on May 22nd, now has tickets available for sale online. This year's speaker is J. Michael Straczynski (AKA JMS), best known as the creator of the cult science fiction serial Babylon 5 and its various spin-off films and series. Straczynski wrote 92 of the 110 Babylon 5 episodes, including an unbroken 59-episode run through all of the third and fourth seasons, and all but one episode of the fifth season. His television writing career spans from work on He-Man, She-Ra, and The Real Ghostbusters through The New Twilight Zone and Murder, She Wrote. He followed up Babylon 5 with another really solid science fiction series, Jeremiah. In more recent years, he's enjoyed success as a screenwriter, most recently writing the script for Changeling, Clint Eastwood's period drama, and as a comic book writer who both works on established superhero franchises, such as Spider-Man, Supreme Power, Fantastic Four, and Thor, and creates his own original series, such as Rising Stars, Midnight Nation, The Twelve, The Book of Lost Souls, and Dream Police. He was one of the first television producers to actively engage his fan community online and has consistently explored the interface between digital media and other storytelling platforms. His work on The Twelve has been nominated for this year's Eisner Awards.

Tickets are also available in person at Hub Comics in Somerville and Comicopia in Boston's Kenmore Square.

Buy yours today, as they're expected to go fast.