Unspreadable Media (Part Four): The Broadcast Ghost

The Broadcast Ghost: The Persistent Logic of Traditional Media Industries' Metrics

by Sam Ford

NOTE: Portions of this piece expand on issues I explore in “Public Diplomacy’s (Misunderstood) Digital Platform Problem,” written as part of the U.S. Department of State Advisory Commission on Public Diplomacy’s May 2017 report, Can Public Diplomacy Survive the Internet?: Bots, Echo Chambers, and Disinformation, edited by Shawn Powers and Markos Kounalakis.

Last spring, I had the great pleasure of convening with a fantastic set of colleagues in Atlanta at the Society for Cinema and Media Studies' annual gathering to hold a panel on the topic of "unspreadable media." Presenting alongside my Spreadable Media co-author Henry Jenkins, who is also our esteemed host for this exchange, gave me a chance to revisit our book in light of the directions our career paths have taken since then. And I had the pleasure of speaking for the first time alongside the fabulous Lauren Berliner, whose dissertation chapter on the "It Gets Better" campaign for LGBTQ youth provided some very productive critiques of campaigns focused heavily on spreadability of a message, and Leah Shafer, with whom I share a passion for carving out space for talking about pedagogy at academic conferences that too often undervalue it.

Without concentrated coordination, we found so many parallels across our presentations that we decided to take to the digital sphere to share our work and find some way to continue this discussion of “unspreadable media.”

The larger theme of this piece is simple: The "broadcast ghost" haunts my work. By that, I mean that the lingering legacy of how media companies and marketers structured their business models in a previous era continues to heavily shape how they approach success in a digital world. From my academic work in media studies/fan studies, to my consulting in the marketing and corporate communication space, to my work in the media industries, the most persistent frustration I run into—and sometimes find myself guilty of—is too easily letting the logics that defined the broadcasting world persist, even when and where they have long since become outmoded.

The Overvaluation of Spreadability

Back in 2014, Henry and I joined a fantastic panel of academics for a roundtable published in full in Transformative Works and Cultures, and excerpted for Cinema Journal, focused on discussing issues raised by Spreadable Media.

Melissa Click, one of my colleagues in the roundtable, writes, “So to be tongue in cheek with the phrase, are people who don't spread dead? I absolutely value Spreadable Media's recognition of everyday activities online and its suggestion that often we are active and passive in different places/times online. I think that's right on—but I'm concerned about reproducing a hierarchy between ‘folks who spread and those who don't.’”

In that roundtable exchange, I responded:

However, this question of whether people, and texts, are dead if they don't spread is one that is a vital corrective toward any overvaluation of spreadability. Despite the pithiness of the "if it doesn't spread, it's dead" mantra…we have to push back against that statement's oversimplification (and hope that we did, through the course of the book). One of my primary goals in cowriting this book was to challenge the notion that producing "texts" should somehow become the definition of participation and push back against the belief that less visible activities of sharing and circulation can somehow be defined as passive audience practice. In response, though, we also can't define value solely in terms of spreadability.

As Melissa points out in our roundtable exchange, we don't want to create new hierarchies that say that audience members who spread texts—or who spread them via certain (online, "surveillable," "monetizable") ways—are somehow more important than those who don't, or whose means of circulating are less visible. And, as Paul Booth writes in our exchange, “If the designers of technology are enabling the grassroots spreadability, is it a concept rooted in manufactured interactivity? Are we swapping the freedom to print whatever the hell we want in a zine and pass it out to 100 people for the ability to repost a clip of American Idol or Breaking Bad to a thousand?”

And, of course, we don’t want to forget that people are apt to have different types of relationships with different types of media texts, even those that they are intensely passionate about. People may find great value in reading their e-mail, watching pornography, or listening to international news but be more likely to share on Facebook that clip of American Idol or Breaking Bad that Paul writes about. While the former texts may have strong individual engagement, the latter may be perceived as having a greater degree of cultural/social value. We don't want to risk conflating the two.

The Overvaluation of Breadth

But, beyond concerns about prioritizing spreadability too heavily over other aspects of participation in and around media texts, there’s also the matter of what we prioritize within spreadability. Since the release of Spreadable Media, I’ve been concerned that our opening statement’s logic—taken too simply—implies success should be defined primarily by breadth…by reach…by impressions. Our goal was to argue for much more than that. Yet, in our desire to demonstrate that audience-driven acts of circulation can often reach or exceed the traditional distribution capabilities of large media corporations, we perhaps made it too easy for readers in the media and marketing industries to focus on the spreadable soundbite opening phrase and ignore more nuanced points.

Perhaps there is no better illustration of this phenomenon than the way many reactions to the book focused on simply replacing the word “viral” with the word “spreadable,” particularly in the marketing/advertising/corporate communication space.


At its core, that debate over replacing “viral” with “spreadable” drove interesting discussion. It caused people to reflect on what the viral metaphor actually meant and how it might be shaping their thinking. But, like Lurpak’s supposedly spreadable butter, it also too often felt like a “spreadable failure” to me. As I consulted in the marketing world, I watched some people switch out the terminology while forging ahead with the same practice of valuing breadth/reach over all else.

But spreading widely is not the only way that material can be highly spreadable. Indeed, the ghost of broadcast haunts the media industries more broadly.

In Spreadable Media, we write about a 2007 example in which CBS’ mantra of viewing “everywhere, anywhere” conflicted with statements from the network and from the producers of Jericho that the only way to “save the show” from cancellation was to watch live: the industry’s continued reliance on an immediate broad audience, counted easily by Nielsen ratings, was still the primary way it knew how to value a show. Ten years later, at the time I’m writing this, a letter from one of the creators of NBC’s Timeless is circulating, arguing that the only way to save that show from cancellation might be to “watch live.” Even as Netflix introduces new models for valuing media texts over time rather than by immediate reach (for more on that, see Amanda Lotz’s new treatise, Portals), the problem persists: to keep shows that have audiences from getting cancelled, media companies need audiences to bend to their inability to shift their business models to current reality, rather than the other way around.

The situation is especially dire in the digital publishing world. In his 2014 piece for The Atlantic called “The Internet’s Original Sin,” Ethan Zuckerman looks at how the longstanding logic of advertising-supported models has thoroughly seeped into supposedly “new” media industries. Writes Ethan:

I have come to believe that advertising is the original sin of the web. The fallen state of our Internet is a direct, if unintentional, consequence of choosing advertising as the default model to support online content and services. Through successive rounds of innovation and investor storytime, we’ve trained Internet users to expect that everything they say and do online will be aggregated into profiles (which they cannot review, challenge, or change) that shape both what ads and what content they see.

In particular, Ethan says that online advertising “creates incentives to produce and share content that generates pageviews and mouse clicks, but little thoughtful engagement.”

Reflecting on My Experiences in the Media & Marketing Industries

For most of the past two years, I worked at Univision/Fusion Media Group, running a group called the Center for Innovation & Engagement, as VP, Innovation & Engagement. In that time, speaking with executives across a broad range of digital journalism sites, I heard a common refrain across the industry: a continued need to sell based on breadth and reach. To even get into the game of programmatic digital advertising buys, publications had certain traffic thresholds they needed to hit. So business models tend to set monthly goals for unique visitors and pageviews and then measure daily against those goals. The result is websites focused on headlines that entice as many people as possible to click on an article or share it. As media companies invest more in creating original video for their social media channels, they likewise focus on getting as many views as possible.

Nowhere in most of these calculations is there significant accounting for depth of engagement, bounce rate, or completion rate (that is, how quickly someone actually departs from a story or video). Instead, the writers/producers who create the content that gets the most clicks are held up as the exemplars in the newsroom, to the exclusion of other ways of valuing content.

As Lucia Moses at Digiday writes, “Many media plans have an arbitrary cutoff point for participating publishers. So publishers need to show big numbers. But this in turn rewards tricks to inflate the size of their audiences and make them appear younger than they actually are.” And that leads to all sorts of strategies focused on making that reach number as big as possible.

One approach is for a big media company to handle the advertising inventory for other sites and then count those other sites’ pageviews as part of its ComScore listing for the portfolio of brands. (See, for instance, Brian Steinberg’s Variety piece on Vice from around this time last year, which notes that, at the time, half of Vice’s online traffic on ComScore didn’t actually come from Vice.) Another regular tactic: publishers pay to promote their stories via promoted posts on social media and content amplification widgets on other publications, aimed at hitting those reach goals. If you can take out ads on your story to drive traffic for a nickel, and then sell that traffic to your advertisers for a dime, you make a profit in the middle.
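To make that arbitrage arithmetic concrete, here is a minimal sketch; the per-visit nickel and dime rates are the hypothetical figures from above, while real campaigns are priced as CPMs and vary widely.

```python
# A minimal sketch of the traffic-arbitrage math described above.
# The nickel/dime rates are the hypothetical figures from the text;
# real campaigns are priced as CPMs and vary widely.

def arbitrage_profit(visits: int, cost_per_visit: float,
                     revenue_per_visit: float) -> float:
    """Profit from buying traffic at one rate and selling it at another."""
    return visits * (revenue_per_visit - cost_per_visit)

visits = 1_000_000  # hypothetical volume of bought traffic
cost = 0.05         # a nickel to acquire each visit via promoted posts/widgets
revenue = 0.10      # a dime of ad revenue per visit sold against that traffic

print(f"Margin on {visits:,} bought visits: "
      f"${arbitrage_profit(visits, cost, revenue):,.2f}")
# Margin on 1,000,000 bought visits: $50,000.00
```

Notice that the margin exists only so long as the buy side accepts raw visit counts as the unit of value; nothing in the arithmetic accounts for whether anyone actually read the story.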

We can debate our feelings on these tactics. My point here, though, is that considerable human and financial capital ends up devoted to activities like these, to hit those numbers in order to get those programmatic ad buys. And that work comes at the expense of something else.

And it does little to build a brand among audiences who engage with those individual stories on a social media site, or to build trust in that brand. As my former Fusion colleague Felix Salmon writes:

The result has been a rise in social teams, some of whom concentrate mainly on getting traffic from Facebook to their website, and some of whom concentrate on building stories which live natively within social apps. (That would include Instant Articles on Facebook.) All of this is good for boosting engagement metrics…But it’s not necessarily good for building a brand. In the news business, if you want to build long-term value, then you need to build a robust brand. Traffic is great, as far as it goes, but it isn’t enough. If you have lots of traffic but little brand value, then you can disappear more or less overnight.

When it comes to online video, consider this. If a 1 min. 45 sec. video produced for a publisher’s Facebook page is seen by 6 million people, that’s not only a bragging point in itself; it also feeds some cumulative monthly number the publisher can use to brag about its overall reach to viewers.

Never mind if the video had, let’s say, an 18% average completion rate, meaning that the average viewer among those 6 million stuck around for about 19 seconds of the 105-second video. (Many videos published on Facebook that rack up a lot of views have a lower average completion rate than that.)
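The arithmetic behind that 19-second figure is worth spelling out; a quick sketch, using the hypothetical numbers above:

```python
# The completion-rate arithmetic above, spelled out
# (all figures are the hypothetical numbers from the text).

video_length_sec = 105      # a 1 min. 45 sec. video
views = 6_000_000
avg_completion = 0.18       # average share of the video watched per view

avg_watch_sec = video_length_sec * avg_completion
total_hours = views * avg_watch_sec / 3600

print(f"Headline metric: {views:,} views")
print(f"Average watch time: {avg_watch_sec:.0f} seconds")   # ~19 seconds
print(f"Total time actually watched: {total_hours:,.0f} hours")
```

The same data yields a bragging-rights number (6 million views) and a burial-worthy one (the average viewer gone in 19 seconds), depending on which line leads the report.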

A mentality built on total reach says, “Even if this isn’t the era of broadcasting anymore, we can reach a mass audience through spreadability.” The reality of the situation says, rather, “Six million people started to check out our video and, on the whole, decided it wasn’t worth sticking around for even its less-than-two-minute run.” That sounds like a stat I would want to bury at the bottom of a report, rather than lead with.

But, as Ethan convincingly argues, these metrics aren’t there for accurate measurement. They are there for “storytime”: for investors and for advertisers. If reach keeps more investors happy, or attracts new ones, all the better. And, if big aggregate numbers can convince advertisers to spend with the right story around those numbers, then publishers will continue along these lines.

Look no further than how newsroom growth is being driven by creating more and more teams for social video, despite headlines from places like Nieman Lab and Poynter such as “Video Isn’t as Popular with Viewers as It Is with Advertisers.”

In an era where everyone is discussing the proliferation of “fake news,” I believe it’s vital to look at how these industry practices, and this broadcast ghost, are leaving organizations destined to keep solving the wrong problems, and to keep selling based on industry logics carried over from an era when it was much harder to measure anything beyond broad reach and circulation. In a piece I wrote as part of the 2017 Predictions for Journalism series at Harvard’s Nieman Lab, I argue that not much else matters if the journalism and media world can’t spend this year addressing “our awful metrics”:

The industry is running on metrics that serve no one well, but we continue chugging along because we all equally accept the lie. In the current model, publishers measure what’s easiest to capture, no matter how reflective of real engagement; production budgets go toward things that generate the best numbers for this so-called “reach” or “impressions” or “uniques,” even when they do little to create revenue or build a brand (especially when on platforms the publisher doesn’t own and can barely monetize); and advertisers accept inflated numbers industry-wide and continue putting the most funding behind stories which may have the least ongoing resonance.

Thus, audiences too frequently get served with unmemorable stories thin on nuance, heavy on provoking knee-jerk response, and with misleading packaging that causes them to bail two sentences, or fifteen seconds, into the piece (assuming the company doesn’t actively measure and talk about bounce rates, completion rates, and/or time spent on site).

Of course, at least media companies have the excuse that they are still making money off the old model. Perhaps the most frustrating part of all is that the broadcast ghost still haunts the marketers, organizations, and independent producers outside the media industries who publish media texts. These creators, despite not being saddled with needing to “monetize” that content, still often adopt breadth of views as the metric for success—no matter what the actual goal of their communication might be.

In a 2016 essay I wrote for Public Relations and Participatory Culture: Fandom, Social Media, and Community Engagement, entitled “Public Relations and the Attempt to Avoid Truly Relating to Our Publics,” I argue that marketers and public relations professionals have in fact embraced these strategies of reach and data to measure success, rather than looking at depth of participation or impact. Their “content strategy” may not be saddled with figuring out how to make money off their media texts; the goal, rather, is to get people to meaningfully engage with the stories they produce. Yet they are likewise operating off metrics of success shaped by decades of the “broadcast mentality.” Advertisers determined success based on the reach of their ads; PR professionals, by the audience numbers of the outlets that covered stories about their company. Even with less direct business incentive to remain tethered to the metrics of yesteryear, these industries still gravitate to the impressive numeric aggregation of reach.

These questions even plague organizations focused on social impact. In a piece on “Measuring Success” in the Knight Foundation’s review of its Tech for Engagement initiative, the foundation writes, considering the challenges of measuring the success of digital campaigns: “Digital technology generates lots of data sets…What gets counted becomes what counts…Engagement, however, is about being ‘attached, committed, involved and productive.’”

In an era of Big Data, all the focus continues to be on the metrics easiest to gather, rather than those most meaningful for the media text.

Concluding Thoughts

The situation we media and marketing professionals find ourselves in means a few things.

It means that media organizations’ metrics are set up not only to the detriment of their audiences and of producing the most meaningful material possible; I’d argue that they are also set up to the detriment of the organizations’ own financial success. The question is how deep a crisis mode media companies will have to enter before they find more meaningful ways to measure success.

It means that many stories aren’t being produced, because media companies’ metrics don’t know how to support them. Resources are diverted from stories that may have longevity in order to keep building teams wholly focused on producing short, quick, thin, and controversial material.

It means that, even if stories are being produced that may have a long shelf life, the machines that are built to support them too often focus on pushing new things out constantly, rather than supporting the continued shelf life of stories that might be relevant for years to come. In other words, organizations may spend nine months on creating a piece of content and one week on supporting its circulation.

And it means that organizations continue to be driven by getting as many passersby as possible, without much thought to audiences coming to them on purpose, and with purpose. To borrow a line from Nicco Mele, the director of Harvard’s Shorenstein Center: how do media organizations focus on a CRM (customer relationship management) approach rather than a broad-reach approach?

My own thoughts are now on the types of spreadability beyond “spreading wide.” Perhaps the kernel of the answer lies in a provocation from Jonathan Groves I often return to, from 2014, on the importance of valuing the longevity of content.

How do media companies and other organizations prioritize models for content that might spread deeply within a particular community but not outside it? Often, the content that resonates most deeply with people is content made specifically for them, which may mean little beyond their group. And, in turn, the content that means the most within a community might be exactly that which the community actively does not want spread beyond its original context.

How do organizations create metrics for success that prioritize content that spreads over time, rather than focusing on how accelerated the spread is? When organizations focus on what spreads widely and quickly, they lose out on material with a long shelf life, material that may eventually gain breadth, but through sustained traffic rather than a spike.
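One hypothetical way to operationalize that question: score stories by the share of their lifetime traffic that arrives after the launch spike, rather than by peak velocity. A minimal sketch, where the seven-day cutoff and the sample traffic curves are my own illustrative assumptions, not an industry standard:

```python
# A hedged sketch of a "longevity score": the share of a story's lifetime
# pageviews arriving after its first week. The 7-day window and the sample
# traffic curves are illustrative assumptions, not an industry standard.

def longevity_score(daily_views, spike_window_days=7):
    """Fraction of total views arriving after the initial spike window."""
    total = sum(daily_views)
    return sum(daily_views[spike_window_days:]) / total if total else 0.0

# A viral hit: a huge first week, then near silence for the rest of the year.
viral = [500_000, 200_000, 80_000, 20_000, 5_000, 2_000, 1_000] + [100] * 358
# An evergreen explainer: modest but sustained traffic all year.
evergreen = [3_000] * 365

print(f"Viral hit: {longevity_score(viral):.0%} of lifetime views after week one")
print(f"Evergreen: {longevity_score(evergreen):.0%} of lifetime views after week one")
```

A newsroom ranking stories by raw weekly pageviews would chase the first curve every time; a longevity-weighted metric at least makes the second curve visible.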

How do organizations consider content that might spread through its residual value—particularly stories that may actually be more valuable later than when they are first published? Media companies in particular do not have processes well-suited to resurfacing stories from the past that gain newfound relevance. In fact, if that happens, it’s often due to the ad hoc institutional memory of an individual who happens to be following the cultural/news trends of the moment and also happens to remember a story from months ago or yesteryear that ties into it—meaning lots of valuable content remains buried in the archive. In industry parlance, “lots of money is being left on the table.”
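As a thought experiment, even a crude process beats ad hoc memory here. A hypothetical sketch, where the tagged archive and the keyword-overlap rule are my own illustrative assumptions rather than any newsroom’s actual system:

```python
# A hypothetical sketch of replacing ad hoc institutional memory with a
# simple process: match today's trending topics against a tagged archive.
# The data shapes and matching rule are illustrative assumptions only.

archive = [
    {"headline": "How a levee failure unfolds", "tags": {"flooding", "infrastructure"}},
    {"headline": "A history of writers' strikes", "tags": {"labor", "hollywood"}},
]

def resurface_candidates(trending: set) -> list:
    """Return archived headlines whose tags overlap today's trending topics."""
    return [story["headline"] for story in archive if story["tags"] & trending]

print(resurface_candidates({"labor", "hollywood", "streaming"}))
# ["A history of writers' strikes"]
```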

And how do organizations better value content whose spreading drives “drillability,” even if it doesn’t spread widely? In other words, what about stories with a high rate of converting people into going deeper: following a publication, exploring a story world, etc.?

When I returned to the world of journalism two years ago, I heard people talking about “viral engines.” Media and marketing professionals are still chasing reach, even as it becomes increasingly transparent how little that reach often means and even as, in the journalism realm, we see the erosion of trust and the various effects that chasing those metrics has on the long-term viability of news brands.

I once told a set of media executives that media companies can’t “make something spread,” but they can make sure something doesn’t spread. Media companies themselves are producing a lot of unspreadable media, because of all that’s wrong with the metrics governing their content producers. In chasing reach, media organizations are producing a lot of ephemeral content whose “spreading” doesn’t have much longevity, and producing far too little of—and supporting even less—the stories whose spreading might have not only ongoing worth to the community, but the potential for ongoing economic value as well, if the business models were optimized to support it. 

Sam Ford consults and manages projects with leadership teams in journalism, media/entertainment, academia, civic engagement, and marketing/communication. In addition, he is lead producer of the MIT Open Documentary Lab’s Future of Work initiative and a co-founder of the Artisanal Economies Project. Sam serves as a research affiliate with MIT’s Program in Comparative Media Studies/Writing and as an instructor in Western Kentucky University’s Popular Culture Studies Program. He writes on innovation in the media industries, fan cultures, immersive storytelling, audience engagement, and media ethics. Sam co-authored, with Henry Jenkins and Joshua Green, the 2013 NYU Press book Spreadable Media: Creating Value and Meaning in a Networked Culture. In 2015, he launched the Center for Innovation & Engagement at Univision’s Fusion Media Group (as FMG’s VP, Innovation & Engagement), which he ran through the end of 2016. He has also been a contributor to Harvard Business Review, Fast Company, and Inc.