Chivalry is Dead: SUDA51's Killer Is Dead, Gigolos, and The Status of (Virtual) Women

This is another in a series of blog posts written by students from my Public Intellectuals seminar in USC’s Annenberg School of Communication and Journalism.

Chivalry is Dead: SUDA51's Killer Is Dead, Gigolos, and The Status of (Virtual) Women

by James B. Milner


I usually don’t purchase video games without doing my homework. This could take a number of forms. I tend to stick to companies that have produced the games I have loved the most in the past. I closely read reviews from sites like IGN and GameSpot, even though I often take them with a grain of salt. In the buildup to the release of a new title, I will watch any number of video clips to get a sense of whether I will enjoy playing the game, and whether or not it would be worth $60 to get it when it comes out. On top of all this, I keep in touch with the associates at my favorite game store, who let me know what people with tastes similar to mine are reserving.

Ignoring most of my usual tricks, I bought Killer is Dead brand new knowing very little about it. I was informed that it was an indirect sequel to Killer 7 (which I had not played, but had heard good things about) and was by the same Japanese developer (SUDA51) who was behind the No More Heroes franchise. I had played No More Heroes and enjoyed it: I thought it a bit simplistic, and a touch repetitive, but stylish and fun and even a little challenging at times, and overall weird and unique, these last qualities being right up my alley, game-wise. And it was a limited edition, which again appealed to me as a collector (I haven’t recreated the shrine since I moved to California, but in Michigan I had all of my boxed sets, art books, stuffies, and other assorted paraphernalia on display on a series of bookcases). So I plunked down my $60 and took it home with me.

And then I read the GameSpot and IGN reviews. Mind you, this post is not a review, nor is it even about the game’s reviews, strictly speaking. It concerns, in large part, a debate about sexism in the game that takes place in the comments on the reviews. It is, however, important for the sake of the discussion to spend a little time on the reviews themselves. And the reviews were—mixed. And consistent. The major knock on the bulk of the action in the game, the fighting, boiled down to this: all you have to do to succeed is alternate between A) simply mashing on the buttons and then B) pressing the dodge button when an enemy attacks. And then I watched gameplay footage, to see if, reviews notwithstanding, I might still enjoy the game, and this criticism seemed to be borne out. Take a look and you’ll see what I mean:

And the only other piece to the puzzle for this game, the only other thing to do in it that doesn’t involve this circuit, is the unfortunate Gigolo Missions.

I say unfortunate—I of course haven’t played the game, which is why this isn’t at all a review, and although from here on out I will be referring to the GameSpot and IGN reviews of the game, it’s really not about those reviews either. But what I found there was enough to make me question whether I could in good conscience play the game and get any enjoyment out of it. Both Marty Sliva of IGN and Mark Walton at GameSpot reported an uneasy relationship (to say the least) with the Gigolo Missions. But what are the Gigolo Missions?

Basically, the goal is to have sex with a virtual woman. How is this accomplished? First, you sit down at a bar next to a woman and order a drink. Then, you ogle her, looking at the appropriate places on her body (you’ll know what you should be looking at, because the area under your gaze will light up if, say, you should be staring at her chest, legs, or crotch). Stare at her enough (without saying anything, mind you), and she’ll ask you for a gift. Gifts are found or bought elsewhere in the game, and, if you bought the limited edition of the game or downloaded some extra content, you got special Gigolo Glasses which will give you a hint as to what she wants (and, of course, give you X-ray vision). Give her the right gift, and you’ll get to sleep with her. And then you’ll be rewarded with a special item. Mission accomplished.

Lest you think I’m making this up, here’s a clip:

The Gigolo Missions are optional, but not strictly so: sometimes a reward item can only be obtained through a Gigolo Mission, and playing the game without these items makes the action more difficult or less interesting. So you can skip them and still complete the game, but you may make things much harder on yourself if you do. And I had to ask myself whether I could suffer through this aspect of the game to keep the action sequences interesting, or whether I could skip the missions, as I would have liked to, and suffer through even more repetitive fights instead. My answer was a resounding “no” on both counts, so I returned the game unopened and unplayed. The discomfort expressed by the reviewers over the Gigolo Missions, combined with my own disdain for game content which turns virtual women into hollow sexual shells, made it impossible for me to consider keeping it.

Where this really gets interesting is not in the two (male) reviewers’ accounts of their discomfiture while playing the Gigolo Missions; they describe these missions with phrases like “digital creeper” and “filth” and report that they “felt weird” to play. What is really interesting to me is the discussion that springs up in the comments, and how some participants in this discussion took an antifeminist stance based on a few lines of criticism of the Gigolo Missions in the reviews.

The reviews did voice misgivings about the misogyny and objectification of women in the Gigolo Missions, but for the most part they focused on technical flaws that contributed to the game’s low scores. This didn’t stop a subset of commenters from fixating on the former criticisms. Some of these comments were what is (unfortunately) pretty standard anti-feminist fare in gamer circles:

GasFeelGood: “People are tired of seeing Internet Feminists forcing opinions as facts and pushing the politicizing of what is imaginary entertainment. This has turned into a cult and this crap operates like organized religion now.

“We want to play games and discuss games, not pseudo-intellectual philosophizing political and social crap that has no significance whatsoever.

“There is no place for subjective political opinion in professional reviews.” To which KillaShinobi replied “They are like Nazis except not intelligent enough to get everyone in on their cause but surely misguided.” (GameSpot)

Atalalama: “It’s gotten to the point anymore that ANY time a “professional game reviewer” (ie: Panders to what’s Socially Fashionable of The Hour, Blathers Gender-Fascism, and/or Comes with a Creamy Undercoating of Purityranical Tropes) slams a game for “degrading women” in some imaginary way, I go out and buy it.” (GameSpot)

IceVagabond: “Here we go again with the neo-feminist nonsense… can we go back to having reviews that critique the actual game more than promote a spiteful (and moreover completely irrelevant) ideology?” (GameSpot)

In these comments, one gets an equation of feminism with Nazism and fascism, as if feminism were concerned with the dogmatic imposition of a coherent and simplified ideology, rather than the breaking down of an entrenched dominant ideology of male privilege. Feminism is multiple, with a variety of aims and a variety of means to achieve those aims, and while there is general agreement that the degradation of women is something to be fought against (rather than a selling point for entertainment media) and that women should be treated equitably, just what this means and how it plays out is so multifaceted that one should hesitate to call it an ideology. But even if it is granted that it is an ideology, it is not a “completely irrelevant” one that has “no significance whatsoever”: if pointing out that the act of scoping out a virtual woman’s body for sexual favors makes one a “digital creeper” leads to charges of Nazism, then clearly the movement has a lot of work to do. And if a culture of virtual objectification doesn’t seem relevant enough, one can get a sense of the broader context of gamer misogyny and anti-feminism by looking at sites like Not in the Kitchen Anymore, Fat, Ugly or Slutty, Kotaku, or The Mary Sue, which document an alarming number of disturbing stories of harassment and threats, including threats of rape and other sexual violence, made by male gamers against female gamers, both generally speaking (almost, apparently, as sport) and particularly when the latter speak up about these very threats or about sexism in gaming generally.

Then there are those who downplay the significance of this type of depiction of women:

Christoffer112: “blablabla femenism bla bla bla, who cares.. it”s a game.” (GameSpot)

rnswlf: “ I’m sorry that you are seemingly too intimidated by the female form to appreciate a little light hearted fun.” (GameSpot)

1983gamer: “Also am I the only one who is tired of all the politics and Hippocratic bull crap that is going on in the gaming community? Really reviewer are complaining about bi-gist sexism in games? Really have we forgotten that video games are a art form? Gamers and reviewer alike. First dragon crown now this?? Its really sicking. The Hippocrates that condemn these games are the worse. No one complains when james bond has sex with a random woman..or halie berry having sex. So if you are one of these people male or female, stop using double standards and review or play the game based on how good the game is. Oh and maybe grow up and not watch sexiest movies or play sexist video games.” [33 votes up, 3 votes down] (IGN)

Kratier: “next time you see an attractive male portrayed in a video game you should call it sleazy as well. unless you know, you’re a hypocrite “ (GameSpot)

AugustAPC: “I mean it’s not like I’m going to pretend these are real women or anything. Seriously, why should anyone give a f*ck if women are portrayed as hypersexual whores in a game that doesn’t take itself seriously? It’s in all kinds of media. Shut your brain off and enjoy it or don’t play it. There are plenty of male tropes that are just as negative in video games. Why can a man-slut blindly f*ck any chick he wants in gaming, but girls can’t do the same? Double standards.” [18 votes up, 0 down] To which Ultimatenut replied: “Because in this particular game, the sex missions are just plain weird. You stare a girl in the eyes and when she’s not looking, you stare at her tits and legs. Then you use your X-ray glasses to look under her clothes. And, apparently, as a result of doing this, she goes home with you.” [3 votes up, 0 down] (IGN)

The charge of “double standards” when there is outcry over the objectification of women in games but not the same outcry when men are objectified is a classic argument (both Kratier and AugustAPC go to this well), but it of course ignores the power differential between men and women. Men never lose their fundamentally dominant position in society even when they are objectified, while women are consistently subordinate, objectification being a constant aggravation of this. During the making of Animal House, Karen Allen expressed misgivings about showing her bare behind on screen, so John Landis added a similarly gratuitous shot of Donald Sutherland’s rear end, as if this balanced it out. Allen was apparently put at ease, but maybe she shouldn’t have been: as a young female actor, her half-nude shot risked her being pigeonholed as the “beautiful ingénue who does nude scenes,” while Sutherland’s shot risked nothing. His shot was safe both because he was a well-established actor at the time and because, as a man, he had little fear of not being taken seriously when he needed to be. In other words, for Sutherland it was “a little light hearted fun,” but for Allen it was a risky career move. The double standard is not in the criticism of objectification, but in society as a whole. For AugustAPC, the fact that the women are virtual “hypersexual whores” removes them from the sphere of reality, where such things would matter, to the sphere of representation, where they (supposedly) don’t; presumably the fact that Karen Allen is a real woman would likewise negate my analogy, since we are discussing the virtual. But the double standard remains even in a virtual space.
A “man-slut” is hardly ever referred to pejoratively, but is more often called a “stud” or, tellingly, “the man,” while negatives like “whore” or “slut” are the weapons of choice for referring to women who “get around.” This means that virtual “hypersexual whores” are a problem in a way that “man-sluts” are not because this trope perpetuates in a virtual space the very real inequality that separates the positive connotations of a sexually active man from the negative connotations of a sexually active woman. Representations draw their content from reality, and as such they have the power to perpetuate this type of inequality or to seek to transform it. Killer is Dead sticks closely to the former. The idea that sexism is innocuous when found in something that is “just a game” ignores the fact that such representations reinforce the reality of sexism pervasive in the broader culture, and in doing so help make it seem natural and inevitable.

Two comments in particular are worthy of note, one from each site, since I think they get at the heart of the problem. The first commenter, pseudospike, seems to be attempting to dismiss the charge that the Gigolo Missions would be off-putting or offensive to female gamers by posting the following video of cosplayer and promotional model Jessica Nigri playing the missions:

His comment is: “What’s this then, double reverse backwards misogyny!?” (GameSpot) He seems to be trying to play up Nigri’s apparent enjoyment of the mission she plays in the video and suggesting that women (as a varied set of individuals) shouldn’t be offended by the missions because this one woman (Nigri) was not, and in fact seemed to have fun while playing. Of course, one can’t decide on the basis of the video whether Nigri really enjoyed playing the Gigolo Missions or was forcing it because she was being paid to do so. Offering Nigri as a representative for women enjoying playing the Gigolo Missions is therefore problematic at best. The idea that one woman’s view negates a flood on the other side is short-sighted and fallacious, and ultimately damaging to the discussion, since it dismisses out of hand the very real concerns of those women (and men) opposed to this type of depiction of women and sexuality. And it is similarly fallacious to point to a woman who is being paid to enjoy what she is doing. Thus, irony aside, this video, or at least its use in the comment thread, may indeed be “double reverse backwards misogyny.”

And then there is DrakeNathan: “It is way too fashionable for game reviewers in the California area to be offended by sexual depictions of women. Honestly, it’s so nauseating listening to these guys try to get a piece by showing how sensitive they are. I know, I shouldn’t assume motives, and I do apologize for doing it, but it’s certainly trendy in game reviewer circles for dudes to be offended by things most girls aren’t offended by. […] There’s a reason I don’t watch certain shows or play certain games, and that’s because they aren’t made for me. I shouldn’t review them.” [19 votes up, 5 down] (IGN, my emphasis)

The point that DrakeNathan misses is that he is basically telling female gamers not to play games at all, because, as numerous gamers and theorists have pointed out, games, especially those for consoles, are almost exclusively made for men. Female gamers must choose from among the games that exist, and since the video game industry has been extremely reluctant to produce gender-neutral or female-oriented games, this means dealing with misogyny, hypersexualization, and objectification to do something they love to do. When a game goes beyond the pale, and introduces gratuitous fantasy sequences such as the Gigolo Missions where women literally ask to be compartmentalized into their most sexually charged body parts, where they want to be gazed at without being spoken to, and where an expensive gift is all that is required for sex, of the one-night stand variety no less, one has to wonder if video game companies are making any progress at all.


The ultimate irony is that while a lot of the comments on the reviews defended SUDA51’s artistic vision in the released version of Killer is Dead, he himself did not:


Kiaininja: “Suda never intended to make KID into a Weaboo eroticism. KID originally was supposed to have a clean deep story of Mondo being a family man surviving to protect them but Suda’s boss ordered him to sexualize and add gigolo to the game and as a result fucked up the story and the game’s original vision.” (GameSpot)

Here is the interview the user cites:

So why did I feel the need to reject Killer is Dead? Couldn’t I just get past the parts I found offensive and play it for the lighthearted and tongue-in-cheek game that it is? Isn’t it “just a game”? Or can it be read as a sign of a tendency of the video game industry to pander to a subset of the audience that likes its virtual women shallow and easy? Can one see it as an indication that the representation of women in video games remains highly problematic? And, in that light, can’t one understand that the defensiveness of those comments I have singled out here against any call for change to this trend of problematic representations is itself a big part of the problem? In the end, even the game’s developer thought that the Gigolo Missions were unnecessary and detracted from the game, but commercial interests won out over artistic vision. As it turned out, maybe SUDA51’s company was right: the controversy over the missions probably sold more copies of the game out of sheer curiosity (or, as in some of the comments, spite) than it lost sales due to disgust or outrage. Sex sells, and so, apparently, does sexism.

But to allow sexism to remain an inevitable part of the industry is not acceptable, for at least two reasons. First, for some of the reasons I outlined above, representations in media have real consequences, and reactionary representations that reinforce an unacceptable status quo have a naturalizing effect which stifles progress. And second, I suspect that those who desire sexism in their games are far outnumbered by those who merely tolerate or suffer it, so that in the end it is unnecessary for selling games. The broader issue remains: sexual and gender equality is a far-off ideal, and in many ways it seems even farther off than usual when looking at the games industry and gamer culture.
But Killer is Dead is just one game, and the comments I selected are representative of one side of the argument over sexism in games, a vocal and fairly coherent side but still not the only game in town. It would seem to me that the way forward would be for all sides of the argument, everyone with a stake in the discussion, to voice their concerns in open forums where they can be heard. The real problem with this rather rosy solution is that, as one gets a taste of in a few of the comments I have quoted, there is a real sense in which civil discussion is not everyone’s goal—and this not only on the side of the argument I’m trying to counter here (dismissive terms like “troglodyte,” “ogre,” “moron,” and “idiot” crop up in responses on the other side). But civility is an attainable ideal, at least on a personal level, and I have tried to treat the commenters I’ve quoted here with respect even as I disagreed with them. Hopefully I have succeeded, at least in a small way, in pushing forward a civil discussion.

James Milner is a Ph.D. student at USC Annenberg whose research lies at the intersection of video games, philosophy, and education. He is also interested in issues of gender and race within video games themselves and in the broader gamer culture. He is an avid gamer, but never seems to be able to find the time anymore to play anything except FarmVille 2.

The Regulation of the Chinese Blogosphere

This is another in a series of blog posts produced by the PhD students in my Public Intellectuals seminar being taught through USC’s Annenberg School of Communication and Journalism.


The Regulation of the Chinese Blogosphere

by Yang Chen

On September 9, China’s highest court and top prosecution office announced that non-factual posts on social media that are viewed more than 5,000 times, or forwarded more than 500 times, can be regarded as serious defamation and result in up to three years in prison.

This new law reflects the tense relationship between the government and the emerging yet rapidly proliferating online public sphere. As one of the 500 million registered users of Weibo (the most popular Twitter-like microblog in China), I feel a hint of nervousness. Normally my posts are read around 500 times – far fewer than the 5,000 threshold – but Weibo is an open space where anyone can view and comment on any post. Thus I have to be much more cautious about what I post in order to keep myself out of trouble.

I hope you won’t ridicule my timidity. Everybody has to be cautious, because the first user arrested for violating this new law was an ordinary 16-year-old schoolboy whose posts questioned police misconduct in a case and a conflict of interest in the court (for further information, see “China detains teenager over web post amid social media crackdown”). But beyond this poor schoolboy, there is a group of people far more nervous about this law – the Big Vs.

Who are the Big Vs? Big Vs are opinion leaders who actively engage in online discussion of political, economic, and social issues. These prominent figures are each followed by more than a hundred thousand netizens on Weibo. Unlike grassroots users, whose identities are hidden, these users are verified by the website with their real names and occupations, and a gold “V” mark beside their account names stands for “verified.”


Because the Big Vs are followed by a considerable number of Weibo accounts, their posts and reposts can reach a much larger audience than those of grassroots accounts. In fact, though verified accounts represent only 0.1% of Weibo accounts, almost half of the hot posts (posts with more than 1,000 comments) are written by them. Thus, rather than a we-media platform, Weibo is more like a “speaker’s corner” for the Big Vs; their posts easily get reposted and commented on more than ten thousand times. Although everyone has the same right of free speech on Weibo, some people, like the Big Vs, speak much louder than others.

Of course, with real identities and huge popularity online, they are also much easier targets for this new law. Let’s take a brief look at what has happened to some of the Big Vs recently.


Most Big Vs are Chinese venture capitalists and investors; they would put their assets at risk if they went against the government. Thus, not surprisingly, the Big Vs have been inclined to cooperate with the government.


After an account is verified and branded with a “V,” the website fits the account into categories such as education, entertainment, business, and media. The verified account enters the “House of Fame” under that category and is recommended to general accounts relevant to that category. This move leads to closer connections among the people within a particular category while simultaneously distancing people in other categories.

Earlier this year, the website asked all users to fill in their educational backgrounds and required newcomers to register with a phone number. This allows the website to identify users’ background information and recommend them to people with similar backgrounds. As a result, highly educated individuals end up communicating with other highly educated individuals, and less educated individuals with other less educated individuals.

Due to this classification, when a user follows a verified Weibo account, that account is recommended to other members of the user’s group, so people end up following the same verified accounts. This system creates information barriers. For instance, the likelihood that a highly educated member will recommend a verified account full of helpful, accurate information to a less educated member in another group is slim. The less educated member may never get the chance to increase his or her access to information, although both are using the same networking service.

Users are also separated by geographical location. Individuals from northern regions speak with other individuals from northern regions; individuals from southern regions, with individuals from southern regions. Each user is matched into groups based on the user’s characteristics and placed in an environment where they can only meet other users similar to themselves. Through this process, these groups drift further and further apart from one another.

Not surprisingly, I have found that users from outside the country are segregated from domestic users as well. When I first came to the US, I registered a Weibo account using my U.S. mobile phone number. I found that my posts were often secretly deleted without any explanation from the website. Even more ridiculous, on my personal page everything looked fine, but on my followers’ pages these posts had quietly disappeared. If my friend had not told me, I would never have known.

A screenshot from my follower’s page

A screenshot from my page

As I have shown, the post in the red circle appears on my personal page but is deleted from my follower’s page. I found what my “deleted” posts had in common: all of them contained the word “activity,” since I was spreading information about USC’s upcoming events – some of which were not even related to China or the Chinese regime. Because some of these posts were deleted the second after I posted them, I guessed that a strong automatic filter system had been applied to my account – maybe my U.S. mobile number put me in a more sensitive position. I was right! After I changed my mobile number to a Chinese domestic number, I never encountered another deletion. The segregation is really simple, yet effective; there is no doubt that the censorship system creates more information barriers.
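The behavior I observed can be imagined as a simple rule-based filter. The following Python sketch is entirely my own illustration – Weibo’s real system is opaque, and the keyword set, the account attributes, and the function names here are all hypothetical – but it shows how a keyword check combined with an account-sensitivity flag could silently hide a post from followers while leaving it visible to its author.

```python
# Purely hypothetical sketch of a keyword-plus-sensitivity auto-filter.
# Nothing here reflects Weibo's actual implementation.

SENSITIVE_KEYWORDS = {"activity"}  # the word all my deleted posts shared


def is_sensitive_account(account):
    # Illustrative rule: accounts registered with a foreign phone
    # number are flagged as sensitive.
    return account.get("phone_country") != "CN"


def visible_to_followers(account, post_text):
    """Return False if the post should be silently hidden from followers.

    The author always sees their own post either way, which matches the
    "fine on my page, gone on my followers' pages" behavior I observed.
    """
    if is_sensitive_account(account) and any(
        kw in post_text.lower() for kw in SENSITIVE_KEYWORDS
    ):
        return False
    return True


us_account = {"phone_country": "US"}
cn_account = {"phone_country": "CN"}

print(visible_to_followers(us_account, "Join our USC activity this weekend!"))  # False
print(visible_to_followers(cn_account, "Join our USC activity this weekend!"))  # True
```

A filter this crude would explain both the instant deletions and why switching to a domestic number made them stop: the same post, from a differently flagged account, simply passes through.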

The Big Vs are verified accounts, each followed by millions of people, which makes them serve as the “links” among the different groups. Controlling these links means further isolating the groups and getting a tight grip on the flow of information on Weibo.

The policymakers’ purpose is to develop a regulated and peaceful internet public sphere. However, we should bear in mind that “peace” does not mean “quietness” or “weakened voices.” There are obviously problems to be solved and voices to be heard. If tears are buried deep in one’s heart, it does not mean the wound is no longer there. I will end this blog with an old Chinese saying, “防民之口,甚于防川”: if you trap water in a stream, there will be a disastrous flood; if you shut out the voices of the public, a worse disaster lies ahead. The saying is thousands of years old, but its words transcend time and still apply today; the Chinese regime should take a lesson from the wisdom of our ancestors.

Information Darwinism

This blog post was produced by one of the students in my PhD seminar on Public Intellectuals, currently being taught at USC’s Annenberg School of Communication and Journalism.

Information Darwinism
by David Jeong

The brain craves information. Individuals demonstrate a high preference for novel, highly interpretable visual information (Biederman & Vessel, 2006). This preference stems from the evolutionary advantage that an information-rich stimulus, image, or environment would provide over a barren one. Neuroscientists have even provided evidence that we have a bias for irregular, non-singular shapes and curved cylinders over regular, singular ones (Amir, Biederman, & Hayworth, 2011). Simply put, human beings are not carnivores or omnivores – rather, we are info-vores. And oh boy, do we have a lot of information – we can presently access more information than ever before in our evolutionary history (I hope I can make this claim?).

Since our brains evolved to solve the problems of our ancestral environments (Cosmides & Tooby, 1992), we may be experiencing a capacity crisis in the amount of information we can remember, understand, or care about. Whether intentionally or not, we are constantly sifting through the information in our environment – we always have, not just in the present day. My main argument is that when we have as much information at our disposal as we do today, there must be casualties.

One type of information that does seem to thrive is novel information – we are constantly sharing and redistributing “original content.” It is no coincidence that we receive pleasure from new information. According to competitive learning theory, otherwise known as “Neural Darwinism,” strongly activated neurons in a network inhibit the future activity of moderately activated neurons upon recurring presentations of an image (Grossberg, 1987). The strongest-activated neurons dominate future perceptions of that image, resulting in a net reduction of neural activity. This means that neurons prefer novel stimuli, because novel stimuli have yet to undergo Neural Darwinism.

The information in the current media sphere seems to also be undergoing its own version of what I will refer to as “Information Darwinism”:

* Given two forms of information, the novel information will dominate over the replicated.
* Given two forms of information, the simple information will dominate over the complex.
* Given two forms of information, the visually appealing will dominate over the neutral.
* Given two forms of information, the humorous (which also implies novelty) will dominate over the banal.
You get the picture.

Of course, novel information does not always reign supreme. Nostalgia and familiarity are counterexamples to this pattern. That said, nostalgia would not be nostalgia if it were pushed to our attention daily. Nostalgic content can only become effective through intervals of inattention.

We have a bias for the fantastic, the amazing, the horrible, and disastrous. Most of the time, we are not interested in what occurs most of the time. We disregard the status quo.

What I mean by Information Darwinism is that amid the massive amount of information being pushed into our brains, we are witnessing an information-based natural selection in which novel, simple, and visually appealing information dominates.

Not only are shorter, simplified forms of information (memes, Twitter updates, Facebook statuses) winning out, but these forms of information also champion novelty (original content, humor) and visual appeal. These “predators” are feasting on information that demands a degree of persistence, permanence, and – god forbid – patience. Public discussion of climate change, ongoing conflicts overseas, inner-city poverty, and our tremendously dysfunctional health care industry is simply being driven to “extinction.”

Tversky and Kahneman’s (1982) availability heuristic suggests that we attribute greater probability and frequency to information that is more readily available in our minds. Perhaps the more troubling issue is the potential for a naturalistic fallacy: the assumption that survival of the fittest indeed yields the “fittest.” Ultimately, “fitness” should refer to physical survival – and indeed, accurate and proper communication of health and political issues does have implications for life and death – but I feel it also encapsulates physical and mental health, financial stability, and any domain of social life that represents a form of success. As such, “fitness” here refers to positive impact on the greatest number of people – regardless of race, gender, nationality, religion, and the like. In other words, we may be fooling ourselves to think that the information our mind’s eye attends to is indeed the information most worthy of our attention.

The information that survives is information that garners our collective attention, that captivates the collective consciousness. This information may be biased, inaccurate, or may simply be fictional content intended for entertainment– which is not to say that such information is meaningless as it represents the social reasons for sharing information in “spreadable media” (Jenkins et al., 2012).

So not only are we wired to prefer this attention-grabbing information; this attention-grabbing information is also being reproduced and shared at the expense of information that is less attention-grabbing.

Problem: We have already been primed with much of the important information in the world.
Problem A: Less attention-grabbing information tends to be information we already know, or information that is complex.
Problem B: Important information tends to be information we already know, which tends to be less attention-grabbing.
We know Diet Coke is bad for us, we know much of the Middle East is in various sorts of turmoil and conflict, we know, we know. We just can’t bring ourselves to care about this information more than the next episode of Breaking Bad, or the top post on the front page of Reddit.

This is not to say that Breaking Bad offers less desirable information or a less desirable mode of delivery. In fact, its writers produced a truly complex form of narrative that cuts against the traditional and familiar TV narrative. It is precisely this creativity and originality that make it a champion of TV ratings and of our collective consciousness.

That said, annual re-runs of Breaking Bad, while remaining popular, will inevitably decline in ratings and in our collective consciousness over time. Aren’t “ongoing issues” basically “re-runs”?

The Irony: “Fittest” information = information that provides a positive impact to the most people. “Fittest” information represents the essence of morality and altruism. Ironically, the information that is becoming “extinct” is the information that is most crucial for our collective success and survival. (Perhaps collective survival goes against the central tenets of natural selection?)

Complex concepts in science are often misunderstood because they are simplified and thought of in terms of “linear causality”, with a singular cause and effect, when in fact science often involves a complex system of causality that may be iterative, cyclical, and unfold over time and space (Grotzer, 2012). According to Grotzer, we simplify causality because of our preference for attributing agency in our conceptual understandings, our tendency to rely on cognitive heuristics (Tversky & Kahneman, 1982), and the limitations of our attention (Mack & Rock, 1998). Our visual perception is subject to the natural tendencies not only of our attention, but also of differences in how we perceive images in our central versus peripheral visual fields.

In a world of images, memes, and 350-character messages, we cannot help but be deterred from complex understandings of crucial political and scientific issues, let alone accurate and complete understandings of them. The non-immediacy of these issues means that they do not alert our attention or perceptual systems the way an elephant charging toward us would. Rather, inattention to and conscious ignorance of non-immediate, non-perceivable issues (radiation contamination, global warming, GMOs, etc.) involve gains that are immediate and gratifying (fresh sashimi, convenience and laziness, cheap food, etc.) and harms that are tacit. Even more troubling is the exploitation of our cognitive limitations and tendencies for harmful ends. Sensory formats (visual and auditory advertisements, and even tastes) are now engineered to target our sensory vulnerabilities while we overlook non-sensory information (global warming, obesity, risky decisions, any decision with a positive short-term and negative long-term outcome).

This is not necessarily a value judgment against the Breaking Bads, the Twitters, and the Reddits of the information world. Rather, there is much to learn from these thriving models of information. There is a wealth of “fit” information intertwined with entertainment on these newer modes of information dissemination. If anything, perhaps we have to move past the “iron curtain” of network news, academic fluff, and the like. We are facing a communication gap, a failure of learning, and a reality that is increasingly at odds with traditional communication environments. If there is indeed an Informational Darwinism underway, we cannot continue to beat a dead horse with “what used to work”. It is our moral obligation to engage in our own pedagogical arms race against the changing information landscape in order to maximize information that yields the most physical, mental, and social “fitness” for as many people as possible.



Amir, O., Biederman, I., & Hayworth, K. J. (2011). The neural basis for shape preferences. Vision Research, 51(20), 2198-2206.

Biederman, I., & Vessel, E. (2006). Perceptual pleasure and the brain: A novel theory explains why the brain craves information and seeks it through the senses. American Scientist, 94(3), 247-253.

Cosmides, L., & Tooby, J. (1992). Cognitive adaptations for social exchange. The adapted mind, 163-228.

Grossberg, S. (1987). Competitive learning: From interactive activation to adaptive resonance. Cognitive Science, 11(1), 23-63.

Grotzer, T. A. (2012). Learning causality in a complex world. Lanham, MD: Rowman and Littlefield Education.

Jenkins, H., Ford, S., & Green, J. (2012). Spreadable media: Creating value and meaning in a networked culture. NYU Press.

Made by Hand, Designed by Apple

This is yet another in a series of blog posts authored by the students in my PhD seminar on Public Intellectuals, being taught this term in USC’s Annenberg School of Communication and Journalism.

Made by Hand, Designed by Apple

by Andrew James Myers


Apple’s recent release of two new iPhone models — the iPhone 5s and 5c — was heralded with a pair of videos celebrating the aesthetics of each device’s design and physical materials. The first, a 30-second spot entitled Plastic Perfected, played at the 5c’s unveiling and aired on national TV; it shows abstract swirls of liquid color against a white background, gradually molding themselves into the form of the iPhone 5c’s plastic shell. Other components, like the camera and the small screws, emerge spontaneously from within the molten plastic, until the idea of the iPhone is fully materialized, having literally created itself.



The other video, a companion piece also shown at the company’s iPhone presentation, depicts a mass of molten gold against a black background, swirling elegantly and weightlessly to sculpt itself into the iPhone 5s. Hovering components gradually descend into place, and the phone spins to present its finished form.



Over this past year, in my research of Apple’s marketing, I have watched hundreds of Apple’s ads and promotional videos extending back to the 1980s. For me, these most recent iPhone promotional videos were a surprising addition to this research, as they embody the purest and most potent distillation yet of a longstanding trend in Apple’s marketing. Apple’s marketing texts have long been preoccupied with constructing a certain aesthetic myth for the creation of Apple products. This mythical origin story at its essence taps into notions of vision, creativity, and genius while obscuring the devices’ real-world material origins as the product of concrete human labor.


Apple frequently releases “behind-the-scenes” promotional trailers for each of its major product launches. In Apple’s (widely accepted) view of product creation, the valuable labor occurs in the realms of engineering, design, executive leadership, and software development. This is reflected in two significant patterns in the visual rhetoric of its behind-the-scenes videos: an exclusive focus on automated robotic assembly processes, and animated visualizations of components spontaneously self-assembling against blank backgrounds. In the narrative framing constructed by these rhetorical patterns, human labor at assembly factories like Foxconn is completely erased, written out of Apple’s corporate self-identity.



For example, consider the above making-of video for the iPhone 5c. The first visual pattern, exclusively showing automated labor rather than human labor, is always accompanied by a verbal discussion of manufacturing innovation. As we watch Macs and iPads being built, we almost never see a pair of human hands; in fact, I have been completely unable to find a single instance where worker hands — much less a full body or face — are shown in an Apple video made after 2008. Hands as a visual symbol and touching as a ritual are instead reserved for the consumer (“The fanatical care for how the iPhone 5c feels in your hand”), with frequent close-ups of disembodied hands touching, gripping, manipulating the product’s glossy material glory.


Second, Apple’s particular imagination of creation is manifest in its animated visualizations of how components fit together inherently and effortlessly. In one major type of these animations, components float in layers in the air, slowly and gracefully settling into a snug assemblage. The molten-plastic and molten-metal ads discussed at the beginning of this post are merely the most recent (and visually extravagant) iteration of this aesthetic. Designing how components will fit together into ever-shrinking cases is essential to Apple’s aesthetic obsession with making products as thin and small as possible. The designers’ work of putting the jigsaw puzzle together conceptually is seen as the real feat; actually putting it together, on the other hand, is trivial.


The visual rhetoric embedded in Apple’s videos clashes intensely with how Apple’s production process has recently been covered by journalists. Beginning in 2006 and climaxing in early 2012, the popular media actively worked to raise awareness of the labor conditions of the individuals who work in the overseas factories producing Apple’s popular iPods, iPhones, iPads, and Macs (along with, secondarily, the electronics of almost every other major brand). This sensational story gained wide exposure by juxtaposing the brand mystique of Apple — perhaps the most meticulously and successfully branded company in the world — with a dystopian behind-the-scenes narrative completely at odds with Apple’s image. Apple has answered this narrative with a number of public relations initiatives, including a few laudable measures that have genuinely improved supplier transparency and labor conditions. Yet, as labor violations in Apple’s supply chain continue to surface, and as Apple’s publicity materials continue to gloss over the human labor involved in product assembly, it is clear that much more needs to be done to address these issues.


A few weeks following two high-profile reports in the New York Times and NPR in early 2012, Apple responded to the negative publicity with a press release announcing that it would for the first time bring in a third-party organization, the Fair Labor Association, to independently audit its suppliers.[1] Apple also exclusively invited ABC news to visit the audit, yielding a 17-minute story broadcast on ABC’s television newsmagazine Nightline.


The Nightline piece offered the first journalistic footage from inside Foxconn’s assembly facility, and the pictures produced were astonishing. Reporter Bill Weir expresses surprise at the magnitude of manual labor he sees, repeatedly suggesting that simply seeing the factory process at work will cause viewers to “think different” about their Apple products. “I was expecting more automated assembly, more robots, but the sleek machines that dazzle and inspire… are mostly made by hand. After hand. After hand.” On Apple’s historical secrecy about its product manufacturing, Weir offers one interpretation. “If the world sees this line,” comments Weir over footage of a long, crowded assembly line, “it might change the way they think about this line.” Cut to a shot of a huge crowd of American consumers lined up to get inside a New York City Apple Store at a product launch.


What the Nightline piece lacks in the sensational details found in other reports on Foxconn, it makes up for with the sheer visual impact of its startling images. We see exhausted workers collapsed asleep at their stations during meal breaks, the infamous suicide nets, the cramped eight-to-a-room dorms, and the apprehension on the faces of prospective employees lining up outside the gates. The report even stages a moment in which the reporters visit a town and show an iPad to the poor parents of Foxconn workers, none of whom have ever seen one.


After ABC’s first exclusive look inside Foxconn, other reporters were granted access to the factory, leading to a significant rise in video footage being broadcast and circulated online. More and more people were being exposed to the reality that iPads and iPhones are made by hand, by real humans struggling in almost dystopian conditions.


As I have researched and grappled with these issues, I have collected every relevant video I could find onto my hard drive, which has over time become quite an exhaustive archive of Apple’s promotional material. At the same time, as I attempt to write about my research, I have been frustrated by my inability to fully convey in written form the visual qualities of the videos I was analyzing. My initial interest in the topic had sprung from an intangible, emotionally entangled reaction I had to the aesthetic contrasts between Apple’s promotional videos and journalists’ Foxconn coverage — and I wondered whether it would be possible to make more impactful points through a visual essay rather than a written paper.


At first, I had in mind little more than a rather conventional expository documentary — nothing more than an illustrated lecture. But after taking Michael Renov’s fantastic seminar on documentary, I decided to try something a little more avant-garde. Inspired by documentary essayists such as Emile de Antonio, Jay Rosenblatt, Alan Berliner, Hollis Frampton, and Elida Schogt, I was interested in testing out these filmmakers’ innovative editing techniques for constructing original arguments by re-appropriating archival footage. I realized it might make for a difficult and enlightening challenge to create a compilation documentary purely from archival footage — without voiceover, interviews, or text. I finished a 12-minute first cut of the video essay this summer, and the result is below.

In contrast to the affordances of the written essay, one strength of the video medium that surfaced during editing was an ability to engage more directly with the kinetic and haptic experience of the body. In her essay “Political Mimesis,” Jane Gaines describes revolutionary documentary’s ability to work on the bodies of spectators, to move viewers to action. “I am thinking of scenes of rioting, images of bodies clashing, of bodies moving as a mass,” writes Gaines, suggesting that “images of sensual struggle” are a key element of a number of political documentaries. Gaines argues that certain depictions of on-screen bodies can produce in the audience similar bodily sensations or emotions, which inspired me to focus in my video essay on the concrete bodily attributes of sweatshop labor.


Gaines’s article led me to formulate the central recurring visual motif of the film: a montage of close-up hand movements. I wanted to illustrate the corporeal vocabulary through which American consumers define their interaction with technology (moving and clicking the mouse, gesturing on a trackpad, tapping and swiping on a tablet), and to offer in contrast the bodily relationship factory line-workers have to those same devices: repetitive, slight, monotonous movements.


As mentioned previously, the human bodies of workers — even their hands — are conspicuously absent from the footage Apple uses in their promotional videos about the making of their products. I tried to draw attention to this gaping corporeal absence with an extended montage segment of these fully-automated factory processes played simultaneously over an audio track explicitly addressing the harsh conditions for the factory workers we’re not seeing. I hoped that by explicitly cultivating a sense of mimetic identification throughout the rest of the film, the sequences of hands-free assembly would stand out as somewhat ghastly and unnerving.


Whether this film is successful in communicating its analysis is for others to decide; for my part, I both enjoyed the novel experience of making it and feel that the video editing process forced me to think about the material I was working with in new ways. Focusing on making an argument through juxtaposition pushed me to look for new contrasts and valences between bits of material I had not noticed before, to consider formal elements like timing and word choice with a new level of scrutiny, and to see my potential output as a researcher and advocate as perhaps not limited strictly to writing books and articles.

Andrew James Myers is a Ph.D. student in Critical Studies at the University of Southern California, and holds an M.A. in Cinema and Media Studies from UCLA. He is post-processing editor for the Media History Digital Library, and assisted in the creation of Lantern, an online search tool for archival media history. A former co-editor-in-chief of Mediascape, his research interests include media industries and production culture, archival film and television history, new media, and documentary.



[1] Apple Computer, Inc., “Press Release: Fair Labor Association Begins Inspections of Foxconn,” (2012),

Mules, Trojan Horses, Dragons, Princesses, and Flies

The following is a post written by one of the students in my PhD seminar on Public Intellectuals being taught this semester at the USC Annenberg School of Communication and Journalism.

Mules, Trojan Horses, Dragons, Princesses, and Flies

by Addison Shockley 

Shortly after I arrived at the University of Southern California, to begin working on my doctorate in communication, there was a banquet to welcome the new group of communication and journalism graduate students. There were round tables, and people sat where they liked. At one point, the Dean of the Annenberg School of Communication and Journalism came up to the table I was sitting at and asked us about our table and what kind of students we were, whether we were journalism students or communication students or a mixture of the two. I spoke up for our table and said, “We all happen to be communication students,” and then added, “It was natural that we all sat together.” He replied, “Just because something is natural doesn’t mean it’s good.” Then he had to walk away because he was being called to give the opening speech, and I sat there and felt a little bit foolish.

He was right, though. Natural is not necessarily good. It’s natural for dragons to take princesses captive in their lairs, but it’s not good for princesses to lose their freedom. It’s natural for flies to be drawn to the light, but it’s not good for flies themselves to be electrocuted when the light’s a bug-zapper. Dean Ernest Wilson believed this principle of distinguishing between what’s natural and what’s good—that sometimes they’re the same and sometimes they’re not—and I believe it too.

This story is a small part of a larger story, the story of my journey of becoming who I am today. When I began my undergraduate education in 2005, I would never have been able to predict that I would be doing a Ph.D. today and examining ideas like rhetoric and the tragedies caused by misunderstanding. When I graduate in a few years, I hope to teach rhetoric in a university and share insights about communication and (mis)understanding with my students, as well as the general public.

It is a commonplace to assert that we are living through a communication revolution, and of course people are studying the ways in which our lives and communication practices are being revolutionized by new technology and new media. This is important work, but I prefer to focus on what I call the “foundational” issues in communication, questions related to what human beings are, and why they should communicate with others in the first place; questions about communicating what we know, and how we know it; questions about values, and how we communicate in line with them, about them; and questions about what it means to communicate purposefully and wisely. These are the questions I believe need to be addressed today alongside the more timely questions about technology, new media, and the ways our world is being transformed by them.

None of our experiences are wasted, or so my mother tells me (and I think she’s right—at least, they don’t have to be wasted). In this post, I share some of my personal history to discuss how it relates to who I have become, and how it has shaped my perspective on a set of real world problems that I’ll share with you.


Like many college students, I changed my major multiple times, uncertain what I wanted to do with my life. I began in the fall of 2005, studying film as a freshman at Azusa Pacific University, which lasted only a year. Having realized the film industry was harder to break into than I thought, as well as less appealing than I had imagined, I decided to return home to Kansas City, Missouri, and began considering what to do next. I took a year off, so to speak, talking with friends and exploring options, and finally made the decision to transfer to the University of Central Missouri—forty minutes east of my hometown of Lee’s Summit, Missouri—to begin classes in the fall of 2007 as a mule (the school’s odd choice of a mascot), majoring in—get this—Construction Management.

My dad had been a construction manager at one point, and he really liked it, so I thought I’d give it a shot. I began taking classes like “Mechanical Systems of Buildings,” talking about beams and pneumatic nail guns, and wondering why anyone would want their mascot to be a mule. (Okay, secretly, I kind of liked it. Mules are humble, but confident). Three semesters later, with an internship under my belt—shadowing construction managers who built mostly fast-food restaurants and office buildings—I realized this kind of work didn’t appeal to me anymore, at least not enough to make it my bread and butter.

Studying construction had taught me some important things, but I knew it was time for me to move on. I didn’t feel like I was getting the hang of it from the classes, and it seemed I wasn’t a natural at it—not even close. Evidence of that includes my “internship boss” yelling at me the last week of my internship and telling me I had been a failure. He was having a bad day. There was some truth to it, though. I lacked the devotion, preparation, guidance and giftedness to do a good job, simply put. Rather than wanting to read blueprints, I wanted to read novels; rather than daydreaming about building, I wanted to use words to make a difference in society. I had taken enough classes to get a minor in Construction Management, and it taught me how to think about the world concretely, though I was not built for it.

Soon after I lost interest—for good—in construction as a career, I discovered “rhetoric.” I took my first course in rhetoric during my senior year of college, and almost immediately I knew this was “it.” For the sake of clarifying what I would want you to imagine when I say “rhetoric,” replace whatever comes to mind with this definition from a famous twentieth-century scholar of rhetoric, I.A. Richards (in his book The Philosophy of Rhetoric), who defined rhetoric as “the study of misunderstanding and its remedies.” To have a “rhetorical” sort of imagination is to be capable of pinpointing instances of misunderstanding and knowing how to work on them in order to undo them.

In rhetoric, a rather marginalized academic subject these days, I found something I cared about. I liked it so much that I decided to do two more years of Master’s-level coursework, mostly in theories of rhetoric, at the same school—remaining a humble yet confident mule. I enjoyed this experience very much and began to see myself studying rhetoric at the doctoral level.

I applied to doctoral programs during the fall of my second year of Master’s coursework, and I was accepted at USC in the spring. My wife and I moved to Los Angeles the following fall to begin the next phase of our lives. I left the mule and started riding a Trojan horse, so to speak.

A few years prior I had left a career in bricks-and-mortar construction—well, a potential career in construction by abandoning my major in Construction Management. I was going to pursue a career in “words-and-ideas” construction—not physical construction, but cultural or, as it is sometimes called, “social construction.”


[Image A: Hyatt Regency walkway collapse, Kansas City, 1981]

It should be clear from events like the historic Hyatt Regency walkway collapse in Kansas City, Missouri, or from the recent five-story building collapse in Mumbai, that construction is risky.

[Image B: Mumbai five-story building collapse, 2013]


Misconstruction can be fatal, whether we are dealing with a physical structure that is literally misconstructed, or figuratively with a faulty idea or mistaken assumption that is used as the basis for further thinking. And it may be an obvious side note, but miscommunication can cause misconstruction, as in when blueprints (poorly designed) are used to communicate building procedures that result in faulty structures.

Today, I tell people that I study misunderstanding and how it messes things up, royally. I am convinced it happens all the time, all around us, sadly without the notice of enough people. Let me explain this phenomenon of “social misconstruction,” and then give some examples of it.


Rhetoricians care about misunderstanding, which results in the social misconstruction of reality. Wait, social what?

When I say the “social construction of reality,” I’m using a phrase that most academics within the humanities and social sciences have heard of—which can refer to the idea, essentially, that reality isn’t really real, truth isn’t truly true, right isn’t actually righteous, et cetera. Countless interactions among persons in societies result in commonly held assumptions about what exists and how we should respond to it, and these commonly held assumptions create the illusion that there is a single reality because most people seem to agree that, well, this is just the way it is: so it must be so.

There are other versions of this idea of social constructionism, some of which allow that an objective reality exists—and those versions are more convincing to me. But according to this radical version of social constructionism, people “construct” reality through interactions with other members of their culture or society (or world), and in this sense, they “make it up” using language. Rather than discovering reality, and disclosing it with language, they do the reverse, creating reality with language. I agree that ideas of reality are rooted in communities. But I don’t believe that all communities are in touch with reality. I believe that people can’t contain absolute reality in a box or in a system of ideas, but I believe that it exists, despite our limitations in apprehending it fully.

This is obviously a deep subject, one we could read libraries full of books about. I hold the minority perspective—the realist view that, on the one hand, there is reality, and on the other, unreality. There’s true, and there’s false. Another term often used alongside the phrase “the social construction of reality” is “intersubjective agreement,” which refers to agreements made about what’s what in life, what things mean and don’t mean, what reality is. This term suggests that reality is nothing more than what we agree it is.

But there’s a problem with this idea. Persons under arrest are either guilty or innocent of their alleged offense. I believe in the valuable work of socially constructing one true reality, and in the wasted time spent constructing unrealities. We don’t create reality; we bump into it. Even if we aren’t sure what we’re bumping into, it’s got a personality, and we better learn it. It’s got rules. And it’s got rewards. As one of my friends said, if you break the rules of the universe, they will break you. Denying reality is a slippery slope to some bad back pain.

Most rhetoricians these days don’t believe in “real reality.” They probably wouldn’t tell people they’re interested in misunderstanding, as I do; they’d probably say they’re interested in multiple understandings, and they would probably be uncomfortable with any claims about misunderstanding (after all, can someone be said to misunderstand a world that isn’t real?).

I believe in the metaphysical parallel to physical blindness: the eyes of minds can be distorted in vision, and effectively blind to what is actually happening. Rhetoricians like me care about social misconstruction and misunderstanding, but also about the related issues of miseducation, miscommunication, misassociation, misinformation, disinformation and deceit.

Richard Weaver, another famous rhetorician, taught that “ideas have consequences”—and Kenneth Burke—perhaps the most famous modern writer on rhetoric—taught that words imply attitudes, which suggest actions. For example, naming someone as an “enemy” implies an attitude toward them that encourages certain actions and discourages others, whereas calling them a “friend” would suggest a different attitude, and thus, different actions.

Just to give a few examples of social misconstruction, we can think for a moment about misassociation. We need to cultivate discernment among our citizens so that they can disassociate what has been misassociated, because misassociating things can disserve and harm people in serious ways.

I was eating dinner with some friends the other night, and they shared some disturbing facts with me. One of them works in Uganda, and she told me that social misconstructions of class have led to the malnourishment of children in Uganda because their society has constructed an association of eating vegetables with being poor. Their parents don’t want to feed their children what might be thought of as “poor people’s food.” My other friend from India, who was dining with us, chimed in and added that in India, white rice is associated with a higher-class diet than brown rice (even though brown rice is healthier). These examples, although limited to food, show how any society can contain “misconstructed” meanings and associations, which contribute to the “breaking down” of lives.

To give another example, this time from the United States, we can briefly consider the work of communication scholar George Gerbner, which helps us see how in the United States, where the average person watches more than four hours of television per day (see footnote for reference), the cumulative effect over time is that television representations begin to cultivate misperceptions of social reality among Americans. For instance, persons who watch crime shows like Law & Order, CSI, or Criminal Minds may perceive that social reality closely matches the depictions in the shows themselves. To see how social misconstructions can emerge from long-term exposure to such shows, consider how CSI consistently presents unrealistic depictions of the technology available to death investigators. Or consider how Law & Order skewed popular perceptions of what constitutes an adequate quantity of evidence to convict someone of an alleged offense, resulting in U.S. jury trials in which jurors required overwhelming amounts of evidence before they were comfortable deciding that a defendant was guilty.

These minor examples address the ways in which instances of media content, consumed together over a long period of time, can influence people's perceptions such that they misassociate, say, guilt only with overwhelmingly unusual amounts of evidence, more than is typically needed to establish a high enough probability of guilt to declare a person guilty.

Misassociation can take many forms besides this one, of course, and much is lost to such mistakes of association. It is the job of the rhetorician to spot them and zap them with his rhe-gun. We don't want anyone to keep misassociating what is natural with what is good. (For further examples of social misconstructions, see the work of Richard Hamilton, who tries to show the mistakenness of a few widely held views in academia.)

Addison Shockley is a doctoral student studying rhetoric, media, and ethics at the USC Annenberg School for Communication. One of his guiding assumptions is that miscommunication, rooted in deception and/or misunderstanding, causes devastating results, and he is motivated in all of his researching, writing, and teaching by the idea of straightening out socially misconstructed realities. He's recently started blogging.

Revisiting Neo-Soul

The following is another in an ongoing series of blog posts from the remarkable students in my Public Intellectuals class. We would welcome any comments or suggestions you might have.

Revisiting Neo-Soul

by Marcus Shepard

Popular music blog Singersroom recently asked an interesting question: "Will Alternative R&B fade away…like Neo-Soul & Hip-Hop Soul?" What's interesting about this question for me is that neo-soul, and some would argue hip-hop soul, never truly found a definition or a sonic boundary to differentiate it from other genres of music during its rise in the 1990s. Different posts could and should be written on hip-hop soul, and even on the validity of the term "alternative R&B," as it appears to be a term used to describe white R&B artists who make music similar to that of Black R&B artists such as the late Aaliyah, Brandy, and Monica. I want to focus, though, on exploring the genre of neo-soul for a moment. It's important to engage with neo-soul because many people believe that this musical discourse has either faded away or was a flash-in-the-pan marketing misnomer that lost steam as we rolled into the new millennium, which is not the case.

While fans of the music labeled neo-soul can often identify the songs, albums, and/or artists that fall under the genre label, the lack of a concrete definition has made the once-burgeoning genre harder to define. As different sonic discourses continue to mix and create unique sounds, defining and creating boundaries for what is "neo-soul" and what is not may become increasingly difficult. Though the label neo-soul has come under scrutiny from artists, musicians, fans, critics, and academics over its validity as a term/genre, others have gravitated toward the term. The confusion over what neo-soul is adds to the debate surrounding the genre. Defining neo-soul is not meant to exclude artists who are on the periphery of the genre or cross over into it, but to give space to those singers and musicians who are entrenched within the discourse of the music.

With the introductory track "Let It Be," Jill Scott also expresses her frustrations with being labeled and defined as a neo-soul artist. Though Scott does not openly name her anguish at her categorization in the genre, she voices an all too familiar cry of artists who simply want to make music devoid of classification.

What do I do
If its Hip Hop if its bebop reggaeton of the metronome
In your car in your dorm nice and warm whatever form
If Classical Country Mood Rhythm & Blues Gospel
Whatever it is, Let It Be Let It Be Whatever it is Whatever it is Let It
If it’s deeper soul If It’s Rock & Roll Spiritual
Factual Beautiful Political Somethin to Roll To
Let It Be, Whatever it is, Whatever it is Let It Be
Let It Be Whatever it is Let It Be Let It Be, Let It
Why do do do I,I,I,I
Feel trapped inside a box when just don’t fit into it

Through her sorrow at being defined and "trapped inside a box," Scott has also excavated what neo-soul is. Though she and other artists often have fraught relationships with the term, understanding it as a convergence of sonic discourses within the soul and rap musical traditions opens up a variety of sonic avenues that Scott, her peers, and her predecessors have pursued. Scott, who often transcends the boundaries of different genres, is able to rise above those very boundaries because of the essence of neo-soul. The genre allows her to waft into the vocal atmosphere with operatic vibrato during the closing number of her concert ("He Loves Me (Lyzel In E Flat)"), just as it allows her and Doug E. Fresh to beatbox during their collaborative "All Cried Out Redux."

Being situated on the periphery of two musical legacies, soul and hip-hop, allows artists within this convergence to craft a variety of sonic discourses that draw on the technical, sonic, and lyrical innovations of both genres. The jazz, gospel, and blues influences of soul, which in their own right offer rich musical and lyrical histories, further add to the wide range of sonic possibilities that artists within neo-soul can tap. Add the plentiful sonic and lyrical options that hip-hop has to offer, and the neo-soul genre seems to have boundless opportunity to grow and coalesce.

So what does neo-soul sound like? Neo-soul is an amalgamation of rap and soul music that relies on the technological advances made during the genesis of rap while readily using the live instrumentation of the soul era. Neo-soul builds upon sampling through its own reinterpretations of soul records, such as D'Angelo's take on Smokey Robinson's "Cruisin'" or Lauryn Hill's cover of Frankie Valli's "Can't Take My Eyes Off You." Though these two are traditional covers, Hill and D'Angelo inject a hip-hop backbeat that is heavily pronounced throughout most neo-soul records. Hill's cover of "Can't Take My Eyes Off You" in particular opens with beatboxing, which leads into the sonic composition of the song and melds into a traditional rap backbeat. Though covers are not unique to this genre, when they occur within the confines of neo-soul, the musical composition of the song is often tweaked and reworked to reflect the sonic collaborations that define the genre.

Though the possibilities seem endless for neo-soul, there is a distinct sonic quality within the genre. Neo-soul is deeply rooted in live instrumentation, and the liner notes of the majority of neo-soul releases showcase the inclusion of studio musicians. Though synthesizing and sampling are present within neo-soul, the genre is building a legacy rooted in both the live instrumentation of soul music and the technologically manufactured sounds of hip-hop.

Striking this sonic balance is one of the challenges the genre faces and artists who opt for a more live sound or a more synthesized sound find themselves closer to the periphery of the genres that represent this sound. As Erykah Badu famously proclaims, she “is an analog girl in a digital world” and striking the balance between these two worlds is what artists who are steeped in the musical traditions of neo-soul are all about.

In addition to the sonic quality and components of neo-soul, the genre is also carried by the lyrical compositions of its artists. While hip-hop soul is a famous fusion of music that is sometimes conflated with neo-soul, artists within the confines of that music are often set apart from neo-soul artists by the paucity of songwriting credits on their résumés and the abundance of synthesized beats. This is not to say that one genre is "better" than the other, but that they each exist in different spheres and planes of a similar musical discourse.

Neo-soul artists, as Jill Scott so eloquently pointed out, speak to the realities of life within their self- or co-written compositions including, but not limited to, issues that touch upon the very essence of human experience. While rap music still speaks to lived experiences, the overarching narrative seems to have shifted to a paradigm that speaks largely to male street credibility and the highly commercialized and commoditized male hustler protagonist.

Neo-soul can be seen, lyrically, as a remix of hip-hop – still speaking to the lived experiences of its listeners as hip-hop and soul did and still do – with a slanted female perspective, as the majority of releases within the confines of neo-soul reflect the female voice.

While both men and women release music within the confines of the neo-soul and rap genres, each genre carries the disproportionate voice of one sex. Whereas rap music has always been rooted in the male perspective, with relatively few female-centered perspectives, neo-soul operates as the contrasting case: the female perspective takes center stage, while the male perspective is given voice in relatively few releases.

Responding to the obvious exclusion of female voices within hip-hop, neo-soul artists find themselves oftentimes engaging with messages perpetuated within hip-hop and mass media in an attempt to recreate and reclaim those representations of Black womanhood.

Another interesting observation about the neo-soul discography is that Black artists account for the majority, if not all, of the releases considered neo-soul. Though white British soul artists such as Amy Winehouse, Joss Stone, Adele, and Duffy have all released albums and/or songs that, given their sonic and lyrical arrangements, would aptly be described as neo-soul, these women are placed under the banner of pop or British soul instead. Jaguar Wright, for one, has pointed out the racialized space that has been built around this marketing genre.

Through the racialization of neo-soul, these artists are able to engage in visual and musical critiques of issues impacting Black communities, such as Jill Scott’s powerful analysis of the state of Black communities in her song “My Petition,” which is lifted from her 2004 album Beautifully Human: Words and Sounds Vol. 2.

Ultimately, neo-soul is a genre that is still alive and well, though the glare of the mainstream press and of platinum-selling singles and albums has waned. Before one engages with the theorizing of "alternative R&B," it is important to revisit and reengage with the visual and musical discourse that is the genre of neo-soul.

Marcus C. Shepard is a Ph.D. student at the USC Annenberg School for Communication and Journalism. His work explores Black musical performance, its intersections with race, class, gender, and sexuality, and its transformative capabilities. Specifically, he focuses on the musical genre neo-soul and its sonic, visual, and political implications within communities of color in the United States. Shepard has also worked at the world-famous Apollo Theater in Harlem as an archivist and maintains his ties to this artistic community.

Solidarity Might be for White Women, but it isn’t for Feminists


by Nikita Hamilton


In early August, the hashtag #SolidarityIsForWhiteWomen sparked an internet-wide conversation about feminism, intersectionality, and inclusion after Mikki Kendall coined the term in her response to tweets to and from Hugo Schwyzer, a professor at Pasadena City College. Schwyzer had just gone on an hour-long Twitter rant in which he admitted to leaving women of color out of feminism, and later apologized for it. He then received sympathetic Twitter responses that moved Kendall to tweet "#SolidarityIsForWhiteWomen when the mental health & future prospects for @hugoschwyzer are more important than the damage he did." She felt that women of color were, and are, continuously left out of feminism and that Schwyzer was another example of that exclusion.

Though this is a necessary discussion, what is most interesting about it is that it is a conversation that started decades ago and has never come to a resolution. The inclusion of women of color has been an issue from the very beginnings of first-wave feminism, and we are simply at another iteration of the same discussion. When white middle-class women wanted to fight for the right to go out into a workforce that Black, Asian, and Hispanic women had already been a part of for decades, if not hundreds of years, they all realized that there would be a continued disconnect. How could there not be, when some of these women came from generations of working and slaving women, or generations of women who had been working side by side with the men of their race?

In a recent NPR Code Switch article, Linsay Yoo asked which women were included in the term "women of color" and advocated for the inclusion of Asian and Hispanic women. Her inquiries and points made sense, since Asian and Hispanic women are also marginalized and often left out of feminism. However, in addition to noticing the continued omission of Arab women from the term "women of color" by each of these commentators, I was left with the question: what do people mean when they say that they want solidarity? Furthermore, what would this solidarity look like, and what are its desired consequences? I believe that this is the question that feminists are really failing to answer.

Mikki Kendall wrote, "Solidarity is a small word for a broad concept; sharing one aspect of our identity with someone else doesn't mean we'll have the same goals, or even the same ideas of community." Kendall's definition undoubtedly sends feminists in the direction they need to go, but the word "solidarity" itself is the problem. Solidarity implies an equality that is not present in the feminist movement or in society at large. We live in a world that stratifies people by gender, race, sexuality, and class. It is quite possible that the expectation of equality that comes with a word such as "solidarity" is the snowball that turns into an avalanche of problems and disagreements. Therefore, it is time to find another label, and it is time to have a very honest conversation among all feminists, both those who feel included in the movement and those who feel excluded from it, about how structural inequalities based upon color, sexuality, and socioeconomic status have to be taken into consideration along with gendered issues.

Of course, there are some key issues that affect all women because they are biologically female. The government's attacks on women's bodies, women's healthcare, violence, and sexual assault are all topics that feminists can agree need to be at the forefront of the women's movement. Depending on race, however, the order of those topics' importance shifts. For example, the 2000 US Department of Justice survey on intimate partner violence found that such violence was particularly salient for Hispanic women, who "were significantly more likely than non-Hispanic women to report that they were raped by a current or former intimate partner at some time in their lifetime." For Black women, sexual assault is a leading issue. According to the Rape, Abuse and Incest National Network (RAINN), the lifetime rate of rape and attempted rape for all women is 17.6%, while for Black women specifically it is 18.8%. There can be consensus on what the issues are, but there also needs to be acceptance of differences, inequalities, and the desire for differing prioritizations. Why can't feminism be a movement of consensus on the overarching issues that affect women, one that also houses Third World and Black feminists' respective prioritized concerns? Why can't each group be a wall under the roof of feminism, providing support while holding different activities in each room of the house?

The women's movement is still needed, but as history has exemplified over and over again, solidarity is not what can be, or needs to be, achieved presently. Solidarity is defined as a "community of feelings, purposes, etc.," and the idea of "community" connotes an equality that is not yet present among all of the women of feminism. A better word may be "consensus," which means "majority of opinion" or "general agreement," because feminists can all agree that there are some overarching feminist issues. Either way, the point is that we set ourselves up for failure every time we sit at the table and come to realize that, once the initial layer of women's issues is peeled back, there are too many differences left bare and unacknowledged in the name of a non-existent "solidarity." Solidarity IS for white women, and for Black women, and for Asian women, and for Hispanic women, and for Arab women. Consensus is for feminists. Let's finally move forward.

Nikita Hamilton is a doctoral student at USC’s Annenberg School for Communication and Journalism. Her research interests include gender, race, stereotypes, feminism, film and popular culture.

The Other Media Revolution

This is another in the series of posts from students in my PhD-level seminar on the Public Intellectual, which I am teaching this term through the USC Annenberg School of Communication and Journalism.


The Other Media Revolution

by Mark Hannah


I've long blogged about the so-called "digital media revolution." Yet deploying digital media to praise digital media has always struck me as a bit self-congratulatory. Socrates, in the Gorgias, accuses orators of flattering their audiences in order to persuade them. This may be the effect, even if it's not the intention, of blogging enthusiastically about blogging.

To be sure, a meaningful and consequential revolution of our media universe is underway.  This revolution’s technological front has been well chronicled and analyzed (and is represented) by this blog and others like it.  The revolution’s economic front – specifically, the global transformation of media systems from statist to capitalist models – has, I think, been critically underappreciated.


What Sprung the Arab Spring?

How attributable is the Arab Spring to Twitter and Facebook, really? After a wave of news commentary and academic research that back-patted western social media companies, some observers now question how much credit digital media truly deserve for engendering social movements. It's undeniable that the Internet does provide a relatively autonomous space for interaction and mobilization, and that revolutionary ideas have a new vehicle for diffusing through a population. But the salience of these revolutionary ideas may have its origin in other media that are more prevalent in the daily life of ordinary Arab citizens.

With limited Internet access but high satellite TV penetration throughout much of the Arab world, the proliferation of privately owned television networks may, in fact, have been more responsible for creating the kind of cosmopolitan attitudes and democratic mindset that were foundational for popular uprisings in that region.

Authoritarian regimes are sensitive to this phenomenon and, as my colleague Philip Seib points out, Hosni Mubarak responded to protests early in the Egyptian revolution by pulling the plug on private broadcasters like ON-TV and Dream-TV, preventing them from airing their regular broadcasts. Of the more than 500 satellite TV channels in the region (more than two-thirds of which are now privately owned!), Al-Jazeera and Al-Arabiya are two news networks that have redefined Middle Eastern journalism and enjoy broad, pan-Arab influence.

The Internet, which represents technological progress and individual interaction, may have emerged as a powerful symbol of democratic protest in the Arab world, even while "old media," with their new (relative) independence from government coercion, may be more responsible for planting the seeds of those protests.


America Online? Cultural Exchange On and Off the Web

Is YouTube really exporting American culture abroad?  The prevailing wisdom, fueled by a mix of empirical research and a culture of enthusiasm for digital media, is that the global nature of the Web has opened up creative content for sharing with new international audiences.  Yet, in light of restrictive censorship laws and media consumers’ homophilic tendencies, we may be overstating the broad multicultural exchange that has resulted.

What has significantly increased the influence of American cultural products, however, is the liberalization of entertainment markets internationally. As international trade barriers loosen, Hollywood films are pouring into foreign countries. Just last year, China relaxed its restrictions on imported films, now allowing twenty imported films per year (most of which come from the United States). This freer trade model, combined with the dramatic expansion of the movie theater market in China (American film studios can expect to generate $20–$40 million per film these days, as opposed to $1 million per film ten years ago), is a boon for America's cross-cultural influence in China.

It's true that rampant piracy, enabled by digital technologies, further increases the reach and influence of American movies and music. To the extent that the demand for pirated cultural products may be driven by the promotional activity of film studios or record labels, this practice may be seen more as an (illegal) extension of new international trade activity than as a natural outgrowth of any multicultural exchange occurring online.

The cultural influence of trade doesn’t just move in one direction though.  As Michael Lynton, CEO of Sony Pictures Entertainment, insisted in a Wall Street Journal op-ed, economic globalization is as much responsible for bringing other cultures to Hollywood as it is for bringing Hollywood to other cultures.

Put otherwise, media systems are both the cause and the effect of culture.


The Cycle of Cultural Production & Consumption

To use a concept from sociology, media are performative. They enable new social solidarities, create new constituencies and, in some cases, even redefine political participation. Nothing sows the idea of political dissent like the spectacle of an opposition leader publicly criticizing a country's leader on an independent television channel. And, on some level, nothing creates a sense of individual economic agency like widespread television advertisements for Adidas and Nike sneakers competing for the viewer's preference.

Sociologists also discuss the “embeddedness” of markets within social and political contexts. From this angle, the proliferation of commercial broadcasters and media liberalization are enabled by the kind of social and political progress that they, in turn, spur.

Despite the above examples of how the media universe's new economic models are transforming public opinion and cultural identity, we remain transfixed by the new technological models, the digital media revolution. It's perhaps understandable that reports of deregulation and trade agreements often take a back seat to trendier tales of the Internet's global impact. The Internet is, after all, a uniform and universal medium, and the causes and consequences of its introduction to different parts of the world are easily imagined.

In contrast, the increased privatization of media, while a global phenomenon, is constituted differently in different national contexts.  The private ownership of newspapers in the formerly Communist countries of Eastern Europe looks different than the multinational conglomerates that own television channels in Latin America.  Like globalization itself, this global phenomenon is being expressed in variegated and culturally situated ways.

Finally, the story of this “other” media revolution is also a bit counterintuitive to an American audience, which readily identifies the Internet as an empowering and democratizing medium, but has a different experience domestically with the commercialization of news journalism.  We haven’t confronted an autocratic state-run media environment and our commercial media don’t always live up to the high ideals of American journalism.  To a country like ours, which has grown accustomed to an independent press, it’s not always easy to see, as our founders once did, the potential of a free market of ideas (and creative content) as a foundation for independent thought, democratic participation, and cultural identity.


Mark Hannah is a doctoral student at USC’s Annenberg School for Communication, where he studies the political and cultural consequences of the transformation of media systems internationally. A former political correspondent for PBS’s MediaShift blog, Mark has been a staffer on two presidential campaigns and a digital media strategist at Edelman PR.

Non-Conforming Americans: Genre, Race, and Creativity in Popular Music

This is another in a series of posts by the PhD students in the Public Intellectuals seminar I am teaching through the Annenberg School of Communication and Journalism.


Non-conforming Americans: Genre, Race, and Creativity in Popular Music
by Rebecca Johnson

Papa bear and mama bear. One was Black, and one was white, and from day one, they filled our Lower East Side Manhattan apartment with sound. Beautiful sounds, organized and arranged, and from song to song they would change in shape, tempo, rhythm, amplitude, voice, instrument, and narrative. Sometimes those sounds would come from my father's guitar playing in the living room, and sometimes, even simultaneously, from my bedroom, where I could often be found recording a newly penned song. But most of the time, those sounds came from speakers.

Sometimes my father would put on this:

And the next day he’d play this:

Sometimes my mother would put on this:

And the next day she would play this:

And sometimes, they would both put on this:

The wide array of sounds that I heard was not limited to the inner walls of my downtown apartment, though. Those sounds echoed as I left home every morning to take the train to my Upper East Side magnet school, as I entered that school every day and saw diverse colors in the faces of my classmates, when I visited my father's parents in Brooklyn, and then my mother's parents on Long Island. The sounds I heard became interwoven in my identity; song by song, strand by strand, they became my musical DNA. As I got older, I learned how those sounds came together, replicated, and mutated in the history of a world much bigger than myself. I discovered how those sounds had changed over time, often through deletions and erasures, but also how they had evolved because of the insertion of something new, something extra.

The sounds I grew up with were all part of the history of popular music in America. Crucial to the trajectory of that history has been the interactions between African Americans and white Americans. The complicated relationship between these two collectivities has informed much of the way we as a society understand how music helps to shape and influence our identities, and how we understand and perceive different styles, or genres, of music themselves. This post aims to explore how these understandings were formed over time, and how they can be reimagined, and challenged, in the digital age.

Popular music in America from its start was not just about sound, but racialized sound. From the moment African sounds were forcibly brought to America in the hands and mouths and hearts and minds of Black slaves, they were marked as different and in opposition to the European sounds that were already at work in forming the new nation. And yet through music slaves found a positive means of expression, identification and resistance where otherwise they were denied. While they would eventually be freed, the mark of slavery never left the African American sound and would be used to label Black music as different and not quite equal. As Karl Hagstrom Miller writes in Segregating Sound, in the 1880s music began to move from something we enjoyed as culture to something we also packaged, marketed, and sold as a business. It was then that “a color line,” or “audio-racial imagination” as music scholar Josh Kun calls it, would become deeply ingrained in much of our understanding of sound and identity (of course, that line had long existed, but it was intensified and became part of the backbone the industry was built on).

The color line that still runs through music today was in part cemented through the establishment of genres. The function of genre is to categorize and define music, creating boundaries for what different styles of music should and should not sound like, as well as dictating who should be playing and listening to certain types of music and who should not (for example, based on race, gender, class). The essential word here is “should.”

The racial segregation in place while the music business was taking its baby steps played a large role in genre segregation. The separate selling of white records and Black "race" records by music companies in the early 1900s assumed a link between taste and (racial) identity. Recorded music meant that listeners could not always tell whether the artist they were hearing was white or Black, and so it became the job of the music and its marketing to tell them. Genres of music are living, constantly evolving organisms that feed off input from musicians, listeners, scholars, and key industry players such as music publishers and record labels. Their contemporary use is a result of this early segmentation.

A selective, condensed timeline of genre segregation goes something like this. In the early twentieth century, many white and Black musicians in the South played the same hillbilly music (including together), which combined the styles of "blues, jazz, old-time fiddle music, and Tin Pan Alley pop." During this period, the best way for musicians to make money was often to perform music that could appeal to both Black and white listeners. Yet hillbilly grew into country music, a genre first and foremost defined by its whiteness and the way it helps to construct the idea of what being "white" means. Jump to the age of rock 'n' roll, and you find the contributions of Black musicians being appropriated (borrowed, or stolen) and overshadowed by white musicians. Fast-forward once again to the 1980s: Black DJs in Chicago and Detroit were creating and playing styles such as house and techno, while white DJs in the U.S. and across the pond in the United Kingdom were simultaneously contributing to the development of electronic music. Yet today, the overarching genre of electronic music, and its numerous subgenres, is commonly known as a style of music created and played by white DJs.

The whiteness that came to define many genres meant to a degree erasing the blackness that also existed in them. Styles such as country and hip-hop have become important markers of social identity and cultural practices far beyond mere sound. At the same time, the rules that have come to define many genres have erased the hybrid origins of much of American popular music. They erase the fact that Blacks and whites, for better or worse, have lived, created and shared music side by side. And, they have created identities side by side, in response and through interactions with one another.

But what if your identity as a listener or musician doesn't fit into any of the categories provided? What if the music you like, and the identity you're constantly in the process of forming, go something like this:

Unlocking The Truth – Malcolm Brickhouse & Jarad Dawkins from The Avant/Garde Diaries on Vimeo.

What if you are not as confident as Malcolm and Jarad, and you are faced with a world telling you that your interests lie in the wrong place, because you are the wrong race? What if you’re like me, with skin and an identity created by the bonding of two different worlds, but you are bombarded with messages that tell you life is either about picking one, or figuring out how to navigate between the two?

Popular music is a space in which identities are imagined, formed, represented and contested. Racialized genre categories that function within the commercial market have not only restricted the ability of musicians to freely and creatively make (and sell) music, but have also impacted the ability of listeners to freely consume.

Take this encounter, for example:

While humorous, it is also realistic. In the privacy of homes (or cars), in the safe space of like-minded individuals, and through the headphones attached to portable music players, consumption has always crossed racial lines. In the public arena though, Black listeners, just like Black musicians, have not had the same affordances as white listeners and white musicians to express and participate in non-conformity. This does not erase the fact that these non-conforming consumers exist, or that they have power. The boundaries of genre and the identities they reflect and produce are imposed from the top down and developed from the bottom up. The old institutions will try to exercise and maintain control in the digital age where they can (just as in the physical world), but our current media environment is more consumer driven than ever. Consumers now want to pull or seek out music, rather than having it solely pushed on them.

We are in a period of transformation. The Internet and new digital technologies have forever altered the landscape of the music industry. The traditional gatekeepers might not be interested in taking risks as they determine how to survive in the digital age, but they no longer hold all of the power. Digital distribution creates new opportunities to have music widely heard outside of established structures. Once-revered music critics, many of whom contribute(d) to and reinforce(d) racialized genre boundaries, are now faced with competition from music recommendation websites, music blogs, amateur reviewers and more. In the past radio might have been the medium that enabled artists to spread their music, but the Internet is quickly coming for that title. In an attempt to more accurately reflect consumption by listeners, the music charts in Billboard magazine have now begun including YouTube plays and Internet streaming in their calculations. Potentially, anyone can make it to the top.

The current moment is providing the opportunity to complicate our understanding of racialized sound. The dominant conceptions of what it means to be Black, what it means to be white, and what it means to create music through those identities are being challenged.

Like this:

Splitting from genre tradition does not have to erase social identities and histories; it can allow music to expand and breathe. The remix and mashup practices of both hip-hop and electronic music have demonstrated how boundaries and identities can be reimagined in a way that recognizes points of similarity, but also celebrates difference.

Doing so is how we get songs such as this:

And artists who make music like this:

Holes are increasingly punched into the lines drawn to bound music and culture. The racial and ethnic makeup of America is changing. From musicians challenging stereotypes in the studio and on the stage, to series such as NPR’s “When our kids own America,” people are taking notice. The mutations in popular music that led to its evolution have always been about looking back, looking left, and looking right in order to look forward. The digital revolution is about providing the tools to make this happen.

Rebecca Johnson is a songwriter and Ph.D. student studying the commodification of American popular music in the digital age at the USC Annenberg School for Communication and Journalism. Her work explores how music is produced, marketed, distributed and consumed in a constantly changing, technologically driven, and globally connected world.

Work/Life Balance as Women’s Labor

This is another in a series of blog posts produced by students in my Public Intellectuals seminar.

Work/Life Balance as Women’s Labor
by Tisha Dejmanee

When Anne-Marie Slaughter’s 2012 Atlantic article “Why Women Still Can’t Have It All” came out, I have to admit I was crushed. Knowing that I want both a career and family in my future, I found Slaughter’s advice demoralising. However, what upset me more was her scapegoating of the feminist movement as a way of rationalising her own disappointment. This led me to explore the continuing inadequacy of support for parents in the workplace, as well as the injustices inherent in the public framing of worklife balance.

Worklife balance is a catchphrase endemic to contemporary life. Despite its ambiguous nomenclature and holistic connotations, worklife balance is a problem created for and directed towards professional, middle-class women with children. Exploring the way this concept has captured the social imaginary reveals that the myth of equality and meritocracy persists in spite of evidence that structural inequalities continue to perpetuate social injustices by gender, race, class and sexuality. It also exposes the continuing demise of second wave feminism in favour of narratives of retreatism, the trend for women to leave the workforce and return home to embrace conservative gender roles.

The circulation of worklife balance as a women’s issue is only logical in an environment where the private and public spheres remain sharply divided. While gendered subjects may move between the two spheres, they maintain primary, gendered obligations to one sphere. Traditionally, men have occupied the public sphere while women occupied the private sphere. The work-life strain that we currently see is a remnant of the battle middle-class women fought in the 60s and 70s, as part of the agenda of liberal feminism, to gain equal access to the ‘masculine’ public sphere. This access has been framed as a privilege for women who, from the 80s – with its enduring icon of the high-paced ‘superwoman’ who managed both family and career – to the present, have been required to posture as masculine to be taken seriously in the public sphere, while maintaining their naturalised, primary responsibilities within the private sphere.

The unsustainability of such a system is inevitable: Women must work harder to show their commitment to the workplace, in order to fight off the assumption that their place is still in the home. The gravitational pull of domesticity and its chores remain applicable only to women, whose success as gendered subjects is still predicated on their ability to keep their house and family in order. Men have little incentive to take on more of the burden of private sphere work, as it is devalued and works to destabilise their inherent male privilege (hence the popular representation of domestic men in ads, as comically incompetent, or worthy of laudatory praise for the smallest domestic contribution). Accordingly, women feel the strain of the opposing forces of private and public labour, which relentlessly threaten to collide yet are required to be kept strictly separated.

Moreover, worklife balance is regarded as an issue that specifically pertains to mothers, because while self-care, friendships and romantic relationships might be desirable, all of these things can ultimately be sacrificed for work. Motherhood remains sacred in our society, and due to the biological mechanisms of pregnancy, is naturalised both in the concept of woman and in the successful gendered performance of femininity. This explains the stubborn social refusal to acknowledge child-free women as anything but deviant, and the delighted novelty with which stay-at-home dads are regarded, a novelty popularised in cultural texts such as the NBC sitcom “Guys with Kids” or the A&E reality television show “Modern Dad,” which follows the lives of four stay-at-home dads living in Austin, Texas.

Photo of Crossfit Session by Pregnant Woman that Caused Internet to Explode

Motherhood heightens the worklife balance problem in two particular ways: Firstly, it exacerbates the difficulties of attaining success at work and at home, because the demands set for mothers in contemporary life are becoming unreasonably high. Fear has always been used to motivate mothers, who are blamed for everything from physical defects that occur in-utero to the social problems which may affect children in later life, as can be seen from the furore that ensued when a mother posted this photo of herself doing a crossfit workout while 8 months pregnant with her third child. Motherhood has become a site that demands constant attention, a trend that Susan Douglas and Meredith Michaels call ‘new momism’: the new ideal of the mom as a transcendent woman who must “devote … her entire physical, psychological, emotional, and intellectual being, 24/7, to her children” (The Mommy Myth, 2004, p. 4).

Secondly, motherhood physically and symbolically hinders the mobility of women across the boundary from private to public, as it reinforces the materiality of female embodiment. Women’s bodies are marked by the fertility clock, imagined in popular rhetoric as a timebomb that threatens to explode at approximately the same time as educated women are experiencing major promotions in their careers, and it provides a physical, ‘biological’ barrier to accompany the limit of the glass ceiling. If infants are miraculously born of the barren, middle-aged professional woman’s body, they are imagined in abject terms – hanging off the breasts of their mothers while their own small, chaotic bodies threaten disruption to the strictly scheduled, sanitized bureaucratic space of the office. This infiltration of home life is epitomised when Sarah Jessica Parker, playing an investment banker with two small children, comes to work with pancake batter on her clothes (which could just as easily have been baby vomit or various other infant bodily fluids) in the 2011 film I Don’t Know How She Does It.

Public attention to this issue has recently been elevated by the publication of writings from high-profile women, including Anne-Marie Slaughter, Sheryl Sandberg and Harvard professor Radhika Nagpal; by speculation about high-profile women such as Marissa Mayer and Tina Fey; and through fictional representations of women, such as the film (originally a book) I Don’t Know How She Does It, Miranda Hobbes in the television show Sex and the City, and Alicia Florrick in the television drama The Good Wife.

Returning now to Slaughter’s article, she details her experience by frankly discussing the tensions that arose from managing her high-profile job as the first woman director of policy planning at the State Department. Specifically, the problem was her troubled teenage son, whom she saw only when she travelled home on weekends. Ultimately, Slaughter decided to return to her tenured position at Princeton after two years: “When people asked why I had left government I explained that I’d come home not only because of Princeton’s rules … but also because of my desire to be with my family and my conclusion that juggling high-level government work with the needs of two teenage boys was not possible.”

While Slaughter remains a tenured professor at an Ivy League university (and her insinuation in the article that academia is the “soft option” is certainly offensive to others in the academy), she speaks of her experience as a failure of sorts, which is affirmed by the response of other women who seem to dismiss her choice to put family over career. Slaughter sees this as the culture of feminist expectation set for contemporary, educated young women, what Anita Harris would call ‘can-do’ girls: “I’d been part, albeit unwittingly, of making millions of women feel that they are to blame if they cannot manage to rise up the ladder as fast as men and also have a family and an active home life (and be thin and beautiful to boot).” In making this claim, Slaughter becomes a pantsuit-wearing, professional spokesperson for retreatism.

Diane Negra discusses retreatism in her 2009 book, What a Girl Wants?: Fantasizing the Reclamation of Self in Postfeminism. Negra describes retreatism as ‘the pleasure and comfort of (re)claiming an identity uncomplicated by gender politics, postmodernism, or institutional critique’ (2009, p. 2). She describes a common narrative trope wherein ‘the postfeminist subject is represented as having lost herself but then [(re)achieves] stability through romance, de-aging, a makeover, by giving up paid work, or by “coming home”’ (2009, p. 5). Retreatism takes cultural form through shepherding working women back into the home using the rhetoric of choice, wherein the second wave slogan of choice is subverted to justify the adoption of conservative gender positions. Retreatism is also reinforced through the glamorisation of hegemonically feminine rituals such as wedding culture, domestic activities such as baking and crafting, motherhood and girlie culture.

In keeping with the retreatist narrative, the personal crisis Slaughter faces is the problematic behaviour of her son, and ultimately she decides that she would rather move home to be with her family full-time than continue her prestigious State Department job. While this personal decision is one that should be accepted with compassion and respect, Slaughter uses this narrative to implicate the feminist movement as the source of blame: “Women of my generation have clung to the feminist credo we were raised with, even as our ranks have been steadily thinned by unresolvable tensions between family and career, because we are determined not to drop the flag for the next generation. But when many members of the younger generation have stopped listening, on the grounds that glibly repeating ‘you can have it all’ is simply airbrushing reality, it is time to talk.”

This backlash against feminism is neither as novel nor as fair as Slaughter suggests, but it does uncover (as I suggested earlier) the continuing scapegoating of the feminist movement as the source of women’s stress and unhappiness, in preference to addressing the rigid structural and organisational inequalities that require women to stretch themselves thin. Slaughter does not acknowledge the personal benefits she has received from the feminist movement and the women who helped pave the way for her success, and in doing so she contributes to the postfeminist belief that second wave feminism is ‘done’ and irrelevant – even harmful – in the current era.

Slaughter suggests that women are loath to admit that they like being at home, which more than anything reveals the very limited social circle that she inhabits and addresses. Retreatism glorifies the home environment, and this new domesticity – as stylised as it is mythical – is the logical conclusion to Slaughter’s assertion that women cannot have it all. Moreover, men become the heroes in this framing of the problem, celebrated not so much for their support in overturning structural inequalities as for their ‘willingness’ to pick up the slack – what should rightly be an equal share of the labour – around the home.

What bewilders me most about this account is Slaughter’s need to discredit the feminist movement. Without trivialising the personal decisions made by Slaughter and many other women in the negotiation of work and childcare, at some point the glaring trend towards retreatism must be considered as more than a collection of individual women’s choices: It is a clue that systematic, institutionalised gender inequality continues to permeate the organisation of work and the family unit.

Slaughter points out the double standard in allowing men religious time off that is respected, but not having the same regard for women taking time off to care for their families; the difference in attitude towards the discipline of the marathon runner versus the discipline of the organised working mother. To me, this does not indicate a failure of feminism – it suggests that feminism has not yet gone far enough. However, yet again the productive anger that feminism bestowed upon us has been redirected to become anger channelled at feminism, taking away the opportunity to talk about systematic failures in the separation of private and personal life, and their continued gendered connotations.

Slaughter’s opinion, although considered, is not the final word on this issue. There are many unanswered questions that arise from her article, including:

  • How we might understand the craving for worklife balance as a gender-neutral response to the upheaval of working conditions in the current economic, technological and cultural moment
  • How technology and the economy are encouraging ever more fusions between the personal and the private, and what the advantages and disadvantages of such mergers might be
  • How to talk about worklife balance for the working classes, whose voices on this matter are sorely needed
  • How to talk about women in a way that is not solely predicated on their roles as caregivers to others

We need people from many different perspectives to share their experiences and contribute to this discussion, and I invite you to do so in the comments below.

Tisha Dejmanee is a doctoral student in Communication at the University of Southern California. Her research interests lie at the interface of feminist theory and digital technologies, particularly postfeminist representations in the media; conceptualising online embodiment; practices of food blogging and digital labour practices.