The Media Consortium launched our Metrics Impact Project in 2012 with generous support from the Voqal Fund. The goal of the project is to learn whether we can quantify the impact that progressive news stories have on audiences by measuring changes in sentiment. In the metrics world, “sentiment” means “how someone thinks about a topic.” In short, if you publish a story about charter schools, does that change how the public thinks about charter schools?

The research is being carried out by Gary King, the Albert J. Weatherhead III University Professor and Director of the Institute for Quantitative Social Science at Harvard, and by his two incredibly capable graduate students, Ariel White and Benjamin Schneer.

The project is guided by three core assumptions:

1) Change in sentiment is the right metric for measuring the impact of news. Specifically, the researchers are measuring changes in the sentiment of what they call “activated public opinion”: the views of people actively trying to change public policy or the views of others (as opposed to surveys, which measure the sentiment of the average American).

2) Measuring changes in sentiment on Twitter will closely match changes in the sentiment of “activated public opinion” more generally.

3) Editorial collaborations are more likely to produce changes in activated public opinion on a regular basis than individual stories by individual outlets.

How the Project Works

Here is how the project actually works:
  • Back in 2013, the researchers asked us to choose a few evergreen topic areas for this project. We chose immigration, education, and reproductive health, and recently added climate change.
  • The researchers have access to the full Twitter “firehose.” They looked back over the past several years of tweets on these four topic areas and established a baseline for the different frames in which people were tweeting about them. Their analysis goes beyond keywords, using an algorithm, refined by human reviewers, to pull apart the nuances of positions such as pro- or anti-charter schools.
  • TMC staff (Manolia and Jo Ellen) organize collaborations around the project topic areas. Each collaboration ideally includes at least two original pieces, with three to five outlets posting those pieces.
  • The researchers need to randomize the experiment to ensure that changes in sentiment come from your stories and not from world events. So we pick two possible publishing dates, and the researchers randomly choose one date.
  • Participants actively avoid publishing on the same topic during the week that was not chosen.
  • The stories all publish on the designated date. Participants in the collaboration retweet each other’s stories using a common hashtag, which lets the researchers track the reach of each specific story. TMC also promotes the stories.
  • The researchers measure sentiment on the topic, and compare the sentiment in the week after a story runs to the baseline measurement of sentiment to see if the collaboration had an effect.
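The randomize-and-measure steps above can be sketched in a few lines of Python. This is a minimal illustration with made-up per-tweet sentiment scores, not the researchers’ actual method, which runs on the full Twitter firehose with a far more sophisticated classifier:

```python
import random
import statistics

def choose_publish_date(candidate_dates, seed=None):
    """Randomly pick one of the candidate publishing dates, so that
    unrelated world events are equally likely to fall in either week."""
    rng = random.Random(seed)
    return rng.choice(candidate_dates)

def sentiment_shift(baseline_scores, post_week_scores):
    """Compare mean sentiment in the week after publication against
    the historical baseline for the topic."""
    return statistics.mean(post_week_scores) - statistics.mean(baseline_scores)

# Hypothetical per-tweet sentiment scores in [-1, 1] for one topic.
baseline = [-0.10, 0.05, -0.02, 0.00, -0.08]
post_week = [0.12, 0.20, 0.05, 0.15, 0.10]

date = choose_publish_date(["2015-06-01", "2015-06-08"])
shift = sentiment_shift(baseline, post_week)
print(f"Publish on {date}; sentiment shift vs. baseline: {shift:+.3f}")
```

A positive shift would suggest the collaboration moved sentiment on the topic, though in practice many such instances are needed before the effect can be separated from noise.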
What is the Benefit of this Project?

The Metrics Impact Project has short-term and, we hope, long-term benefits for TMC outlets.

In the short term, participants have told us they have realized two benefits:

1) Marketing. Publishing pieces collaboratively has helped outlets extend their reach to new audiences, both in terms of social media and in terms of readers.

2) Editorial. Journalists involved in collaborations have told us that they have gotten more and better story ideas; when journalists from different outlets have worked together on stories, they tell us it has provided professional development and strengthened their skills as journalists.

In the long term, our aim is two-fold:

1) Provide quantitative proof to funders that news stories change sentiment. Being able to provide such proof has become increasingly important in the funding environment; for more, see this article by Ethan Zuckerman in the Stanford Social Innovation Review.

2) Develop a tool so that news outlets can continue to measure this change in sentiment.

Where Are We Now?

What follows is a brief chronology of the project and next steps.

Year One, 2013. This year was spent mainly behind the scenes, gathering information and designing the experiment.

Year Two, 2014. We began the project in earnest, focused on testing the theory that collaborations change sentiment. With funding from the Voqal Fund, we ran nine collaborations. Each involved five or more outlets and required reporters and editors to coordinate with one another. The researchers found a strong correlation between these collaborations and sentiment changes, but they realized they would need up to 40 collaborative instances to get statistically clear data. At the same time, participants told us that these collaborations took a lot of their staff time, and they took a great deal of TMC staff time as well.

Year Three, 2015.
In the spring of this year, we went through another design phase in response to what we learned in 2014. As a result, we changed the nature of the collaborations so that they are now more like co-publishing instances: instead of asking outlets to work together from scratch on a set of stories, we now ask outlets to co-publish stories they planned to run anyway. We also hired Manolia Charlotin to direct the project.

Since May 2015, we have published 18 of these collaborations. We are very excited that, so far, the researchers are again seeing a correlation between each collaborative instance and sentiment change. In short, it looks like the theory works, and that we will be able to provide quantitative proof that your stories, at least when produced collaboratively, have a measurable impact. However, we cannot publish this data until we have enough instances to confirm that the initial reading of the data holds up.

Our next step is to run 20 more co-publishing instances in 2015. We expect to conclude the project by December 31, 2015. The researchers will present their preliminary findings at our annual conference in Philly in February 2016.

Jo Ellen Green Kaiser,