
Driving national discussions

Study shows even smaller media outlets can have wide impact on national conversation 

Harvard University

With a readership that runs into the millions, few would argue that the New York Times doesn't influence public debate on a host of issues. But what about a news outlet with a circulation of only about 50,000?

The answer, says Albert J. Weatherhead III University Professor Gary King, is that even small- to medium-sized media outlets can have a dramatic effect on the content and partisan balance of the national conversation about major public policy issues.

In the first large-scale randomized media experiment ever conducted, King and former students Benjamin Schneer, now assistant professor at Florida State University, and Ariel White, now assistant professor at MIT, found that if just three outlets write about a particular major national policy topic - such as jobs, the environment, or immigration - discussion of that topic across social media rose by more than 62 percent, and the balance of opinion in the national conversation could be swayed several percentage points based on that coverage.

"For several hundred years, scholars have tried to measure the influence of the media. Most people think it is influential, but measuring this influence rigorously with randomized experiments has until now been impossible," King said. "Our findings suggest that the effect of the media is surprisingly large. Our study's implications suggest every journalist wields a major power and so has an important responsibility."

Those findings, King said, are the result of more than five years of work, much of which was spent convincing 48 news outlets to agree to take part in the study. Fortunately, about half of these outlets were represented by the Media Consortium, a network of independent news outlets, eager to find a way to measure impact and willing to help.

"Much of the work leading up to this study involved finding a way to bridge the cultural divide between journalism and science," King explained. "Through years of conversations, much trial and error, and a partnership with Media Consortium Executive Director Jo Ellen Green Kaiser, we learned to understand journalistic standards and practices, and the journalists learned to understand our scientific requirements. What ultimately made it all work was a novel research design we developed that satisfied both camps."

Though similar efforts have been tried in the past, they invariably collapsed as journalists chafed at the notion of being told what to report and when to report it.

To get around the problem, previous researchers fell back on clever workarounds, such as comparing areas inside and outside a particular outlet's broadcast range. But because no one knew whether those areas differed only at random, the results were hard to evaluate. Such studies faced many other problems, particularly their inability to control for a host of factors such as race, education, and income.

"From a scientific point of view, we have to be able to tell the journalists what to publish, and preferably at random times," King said. "From a journalistic point of view, these scientific requirements seem crazy, and journalists reasonably insist on retaining absolute control over what they publish. The two sets of requirements seem fundamentally incompatible, but we found a way to create a single research design that accomplished the goals of both groups."

It wasn't only the participation of news outlets that made the study noteworthy, though.

"If you're doing something like a medical experiment, you may randomly assign individual people to one of two groups, and then each person is your unit of analysis," he said. "But when a media outlet publishes something - no matter how small - the potential audience it could impact includes basically everybody in the country. That means our unit of analysis can't be a person; it has to be the entire country, which greatly increases the cost of the study." This means that the equivalent of an entire experiment in many other studies constitutes only one observation in this study.

Because collecting each observation was so expensive, and so logistically challenging, King and colleagues used, and further developed, novel statistical techniques to enable them to collect only as much data as needed. They could then examine, after each massive national experiment, whether they now had enough data to draw reliable conclusions.

"That allowed us to keep going until we got to the point where we had exactly as much data as we needed, and no more," King said. "As it turns out, we ran 35 national experiments that produced 70 observations."

To achieve the experimental randomization needed for the study, King's team, the Media Consortium staff, and journalists at the 48 outlets together identified 11 broad policy areas - like jobs, the environment, or immigration. They then simulated the tendency of journalists to influence each other and publish stories on similar topics, sometimes known as "pack journalism," by choosing three or four outlets from their participating group of 48 to develop stories together that fell into the same broad policy area.

"For example, if the policy area was jobs, one story might be about Uber drivers in the Philadelphia area," King said. "We would then identify a two-week period where we predicted there wouldn't be any surprises related to that topic area - so if the President was planning to give a speech about immigration in one of those two weeks, we would not run an experiment on immigration during that time."

Randomization came from researchers flipping a coin to determine which of those weeks would be the publication week, and which the control week.
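The assignment procedure described above can be sketched in a few lines of code. The outlet names, policy areas, and random seed below are placeholders standing in for the study's actual materials, and the two-week window is assumed to have already been screened for predictable news events.

```python
import random

# Placeholder inputs; the real study used 48 outlets and 11 broad policy areas.
outlets = [f"outlet_{i:02d}" for i in range(1, 49)]
policy_areas = ["jobs", "environment", "immigration"]   # 3 of the 11 areas

rng = random.Random(2014)   # fixed seed only so this sketch is reproducible

def design_experiment(outlets, policy_areas, rng):
    """One illustrative experiment: pick a policy area, a small 'pack' of
    outlets to develop related stories, and flip a coin to decide which of
    two pre-screened quiet weeks is the treatment (publication) week."""
    area = rng.choice(policy_areas)
    pack = rng.sample(outlets, k=rng.choice([3, 4]))     # 3-4 cooperating outlets
    treatment_week = rng.choice(["week_1", "week_2"])     # the coin flip
    control_week = "week_2" if treatment_week == "week_1" else "week_1"
    return {"area": area, "pack": pack,
            "treatment_week": treatment_week, "control_week": control_week}

print(design_experiment(outlets, policy_areas, rng))
```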

"At first, our outlets didn't really understand what randomization meant," said Kaiser. "Our project manager Manolia Charlotin and the researchers worked very closely with all the outlets to ensure they followed the researchers' rules. This was a resource-intensive project for us, but the unexpected benefit was that outlets found they also gained many qualitative benefits from collaborating."

In both treatment and control weeks, King, Schneer, and White used tools and data from Harvard-based start-up Crimson Hexagon to monitor the national conversation in social media posts. King is a co-founder of Crimson Hexagon and, with a previous generation of now-former Harvard students, developed the automated text analysis technology that Harvard originally licensed to create the company. The methodology, he explained, "is used to evaluate meaning in social media posts. So if you have a set of categories you care about, we identify example posts in these categories, which is what humans are good at. Then our algorithm can amplify that human intelligence and, without classifying individual posts, can accurately estimate the percent of posts in each category each day."
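The general idea King describes - estimating the share of posts in each category directly, rather than classifying posts one by one - can be illustrated with a minimal sketch along the following lines. This is not Crimson Hexagon's actual implementation; the posts, word stems, and categories are invented, and a real system would use far richer text features.

```python
import itertools
import numpy as np

# Hypothetical hand-labeled example posts (category, text): the part humans are good at.
labeled = [
    ("jobs", "new jobs report shows hiring up"),
    ("jobs", "factory hiring and wages rising"),
    ("jobs", "unemployment falls as hiring grows"),
    ("other", "great game last night"),
    ("other", "loved that new movie"),
    ("other", "weather is beautiful today"),
]

# Hypothetical unlabeled stream whose category mix we want to estimate.
unlabeled = [
    "hiring is up and unemployment falls",
    "jobs report out today",
    "what a game",
    "new movie this weekend",
    "wages rising in my town",
]

stems = ["jobs", "hiring", "unemployment", "wages", "game", "movie", "weather"]

def profile(text):
    """Binary word-stem presence profile for one post."""
    return tuple(int(s in text) for s in stems)

def profile_distribution(texts):
    """Empirical distribution over all possible word-stem profiles."""
    counts = {}
    for t in texts:
        counts[profile(t)] = counts.get(profile(t), 0) + 1
    all_profiles = itertools.product([0, 1], repeat=len(stems))
    return np.array([counts.get(p, 0) / len(texts) for p in all_profiles])

categories = ["jobs", "other"]
# P(profile | category), estimated from the labeled examples.
X = np.column_stack([
    profile_distribution([t for c, t in labeled if c == cat]) for cat in categories
])
# P(profile) observed in the unlabeled corpus.
y = profile_distribution(unlabeled)

# Solve y ~= X @ proportions for the category proportions by least squares,
# then clip and renormalize into a valid probability vector. No individual
# post in the unlabeled set is ever assigned a category label.
proportions, *_ = np.linalg.lstsq(X, y, rcond=None)
proportions = np.clip(proportions, 0, None)
proportions = proportions / proportions.sum()
print(dict(zip(categories, proportions.round(2))))
```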

What they found, King said, is that the effect was larger than anyone anticipated.

"The actual effect is really big," King said. "If three outlets (with an average circulation of about 50,000) get together and write stories, the size of the national conversation in that policy area increases a lot - it's a 62 percent increase on the first day's volume distributed over the week, just from these three little outlets."

"These national conversations about major policy areas are essential to democracy," he added. "Today this conversation takes place, in part, in some of the 750 million publicly available social media posts written by people every day -- and all available for research. At one time, the national conversation was whatever was said in the public square, where people would get up on a soapbox, or when they expressed themselves in newspaper editorials or water-cooler debates. This is a lot of what democracy is about." King explained, "The fact that the media has such a large influence on the content of this national conversation is crucial for everything from the ideological balance of the nation's media outlets, to rise of fake news, to the ongoing responsibility of professional journalists."

###

This research was supported with funding from Voqal and Harvard's Institute for Quantitative Social Science.
