(CME) Sifting for Gold: Finding the Best Research for Your Story

The Big Question: How can we help journalists find the best research for their stories?

Background:

Consider the following two experiences:

● An academic article that forms the basis of a news story comes from a journal with a terrible reputation. The journal may or may not engage in peer review. It may or may not charge academics to have their articles included. Academics at leading institutions do not publish in it. None of this information is apparent from the article itself, however.

● A researcher on a topic is not contacted for an article; instead, others describing the researcher's work are quoted.

These examples showcase a potential problem for journalists: finding high-quality sources while on deadline. How can we help journalists?

A bit about your organization/group:

The Center for Media Engagement at The University of Texas at Austin is dedicated to helping media organizations meet their business and democratic goals. We’re particularly interested in helping the public to find trusted sources of information, and in helping news media engage the public.

Additional resources to help you prepare:

An organization that connects journalists with researchers: https://www.sciline.org/

Another organization that connects newsrooms with the public: https://www.publicinsightnetwork.org/

What type of deliverables are you looking for:

● A creative and reliable solution to connect journalists with possible sources

● A way to signal the reliability and quality of academic research


(CredCo) Data Exploration and Discovery around News Credibility

The Big Question: What signals or indicators from news articles are more correlated with credibility?

Background: In the Credibility Coalition’s first work, we outlined a process for defining indicators of credibility and validating them through the collection of annotation data. From going through this process with a focused set of indicators and articles, we found several indicators that proved reliable and correlated with domain experts’ credibility scores, such as the presence of a clickbait title or the accurate representation of sources cited in the article.
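
A toy illustration of that validation step, assuming annotation data with a binary indicator and a continuous expert score per article (the column names and numbers below are invented, not the actual CredCo data):

    # Sketch: test whether one annotated indicator correlates with expert
    # credibility scores. Data and column names are hypothetical.
    import pandas as pd
    from scipy.stats import pointbiserialr

    # One row per annotated article.
    df = pd.DataFrame({
        "clickbait_title":    [1, 1, 0, 0, 1, 0, 0, 1],                  # binary indicator
        "expert_credibility": [2.0, 1.5, 4.5, 4.0, 2.5, 3.5, 5.0, 1.0],  # expert score
    })

    # Point-biserial correlation suits a binary indicator measured
    # against a continuous score.
    r, p = pointbiserialr(df["clickbait_title"], df["expert_credibility"])
    print(f"r = {r:.2f}, p = {p:.3f}")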

A bit about your organization/group: Credibility Coalition is an interdisciplinary community committed to improving our information ecosystems and media literacy through transparent and collaborative exploration.

WorkBench is an integrated data journalism platform that makes it easy to assemble data scraping, cleaning, analysis, and visualization tasks without any coding. You can load live data from the sources you already use, including open government portals, social media, SQL databases, and Google Drive. Completed workflows can be shared along with the story to show audiences the story behind the chart.

Additional resources to help you prepare:

● Need to understand what a signal or indicator is in the first place? Try this: https://misinfocon.com/nutrition-labels-for-the-news-a-way-to-support-vibrant-public-discussion-20c64225637c?gi=90ef918cff69

● See some of our slides in this deck so you get an idea of the kinds of preliminary conclusions we made during the first study: https://docs.google.com/presentation/d/16_I2hjFCO5k6n1PONsSsZYqyqa-ndzmBt5ZVdQuEgjU/edit#slide=id.g38d3ddfe32_2_20

● Read our initial paper for the full explanation: https://credibilitycoalition.org/results/

● Link to datasets: provided later.

What type of deliverables are you looking for:

Using the new data, we’d love to see whether the conclusions from our first paper are confirmed or whether new insights emerge. Perhaps we could have the beginnings of another paper!


(CredWeb) Mapping the Building Blocks of Credibility: a Draft Credibility Schema

The Big Question: Can we build a basic schema of credibility signals for both human use and computer interoperability?

Background: As concerns grow about the importance of establishing credible content, so do concerns about how this credibility is communicated as content travels around the web. Simply rating an article as “credible” is not enough; we need to understand which parts of it are credible, how the conclusion about its credibility was reached, and how to communicate that credibility effectively. Defining a set of standards for content credibility gives us a more effective way to talk about it and, crucially, to make informed decisions about how we share and display that content, regardless of what site it appears on.

This project is intended to support an ecosystem of interoperable credibility tools, including those being built and tested at CredCon. These software tools, which may be components of familiar existing systems, will gather, process, and use relevant data to help people more accurately decide what information they can trust online and protect themselves from being misled. We expect that an open data-sharing architecture will facilitate efficient research and development, as well as an overall system that is more visibly trustworthy.
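
To make this concrete, here is a minimal sketch of what one machine-readable signal observation could look like. Every field name below is a hypothetical illustration, not the W3C draft vocabulary (see the resources below for the actual framework):

    # Sketch of a machine-readable credibility signal observation.
    # All field names are hypothetical, not the W3C draft vocabulary.
    import json

    signal_observation = {
        "signal": "clickbait_title",                      # which indicator was measured
        "subject": "https://example.com/news/story-123",  # the content it applies to
        "value": True,                                    # the observed value
        "observer": "https://example.org/annotator/42",   # who or what made the claim
        "method": "human-annotation",                     # how the value was obtained
        "observedAt": "2018-11-10T16:20:00Z",             # when it was observed
    }

    # Serializing to JSON keeps the record interoperable across tools.
    print(json.dumps(signal_observation, indent=2))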

A bit about your organization/group:

Credibility Coalition is an interdisciplinary community committed to improving our information ecosystems and media literacy through transparent and collaborative exploration.

The mission of the W3C Credible Web Community Group is to help shift the Web toward more trustworthy content without increasing censorship or social division.

Additional resources to help you prepare:

● Need to understand what a signal or indicator is in the first place? Try this: https://misinfocon.com/nutrition-labels-for-the-news-a-way-to-support-vibrant-public-discussion-20c64225637c?gi=90ef918cff69

● Our draft report at the W3C: https://w3c.github.io/credweb/report/20181011

● Our draft signal framework at the W3C: http://credweb.org/signals

What type of deliverables are you looking for:

● We'll make a wall full of potential signals

● We'll have a working document with a first draft of signals by the end of Sunday, along with considerations for each signal


(Bitpress News) Misinformation Alerts

The Big Question:

How can we alert journalists to possible misinformation quickly, so they can help prevent it from spreading?

Background:

In the past, sharing identified misinformation among journalists has proven effective at combating it, as evidenced by the Active Measures Working Group, which brought journalists together to combat Soviet propaganda, and by more recent task forces focused on elections.

Outside of these special projects, few channels exist for journalists to share newly identified misinformation. Compounding this, investigating and publishing fact-checks takes time, so misinformation often propagates unaddressed for hours or days before it can be reported on.

We believe that this can be addressed in two ways:

1. Preemptively identifying news topics that may be targets for misinformation.

2. Quickly sharing fact-checks between journalists.

We propose an alert system that any journalist can subscribe to in order to access this data.
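
At its core, such a system is just a subscriber list plus a dispatch step. A minimal sketch follows; the addresses and SMTP host are placeholders, and a real version would more likely be assembled from existing email/SMS services:

    # Sketch of the proposed alert loop: keep a subscriber list and push
    # each new alert out by email. Addresses and the SMTP host are
    # placeholders, not real infrastructure.
    import smtplib
    from email.message import EmailMessage

    subscribers = ["reporter@example.com", "editor@example.org"]  # hypothetical list

    def send_alert(subject: str, body: str) -> None:
        for addr in subscribers:
            msg = EmailMessage()
            msg["From"] = "alerts@example.net"   # placeholder sender
            msg["To"] = addr
            msg["Subject"] = subject
            msg.set_content(body)
            with smtplib.SMTP("localhost") as smtp:  # placeholder SMTP host
                smtp.send_message(msg)

    send_alert(
        "Misinformation Alert: uncertain claim circulating",
        "A claim about topic X is spreading; no evidence yet either way.",
    )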

A bit about your organization/group:

Bitpress News uses technology to make human fact-checkers more efficient. Our data science platform algorithmically identifies misinformation signals not from within a single article, but from the network of other publishers that either confirm or contradict it. This allows analysts to concentrate fact-checking efforts on high-risk content. We believe that this unique approach, combining human journalists with machine learning, is the only way to solve the misinformation problem at scale.
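
To illustrate the network idea in miniature, a claim can be scored by how other publishers line up around it. The scoring rule below is an invented illustration, not Bitpress’s actual algorithm:

    # Toy network-level score for a single claim, based on whether other
    # publishers confirm or contradict it. Invented illustration only.
    from collections import Counter

    # (publisher, stance) pairs observed for one claim.
    stances = [
        ("outlet-a", "confirm"),
        ("outlet-b", "contradict"),
        ("outlet-c", "contradict"),
        ("outlet-d", "confirm"),
        ("outlet-e", "contradict"),
    ]

    counts = Counter(stance for _, stance in stances)
    total = sum(counts.values())

    # Balance in [-1, 1]: negative means the network leans toward
    # contradicting the claim, so it gets flagged for human review.
    score = (counts["confirm"] - counts["contradict"]) / total
    print(f"network balance: {score:+.2f} (from {total} publishers)")
    if score < 0:
        print("flag for fact-checker review")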

Additional resources to help you prepare:

Bitpress News Misinformation Alerts - https://bitpress.news/misinformation-alerts/

Bitpress News currently publishes fact-checks for claims that have been flagged by our data science platform and reviewed by a staff member. We are still ramping up our efforts, as we don’t have full-time journalists on staff, but we produce ~5 Misinformation Alerts daily at this time. Often, we publish a Misinformation Alert while the rating is still ‘Uncertain’, to alert others that no evidence has yet emerged to prove or disprove a claim.

What type of deliverables are you looking for:

1. Product - The tooling to manage content, maintain a subscriber list, and email and/or text content to subscribers. This should likely be hacked together from existing services.

2. Governance body - This alert system should aim to serve publishers and journalists across the political spectrum, and will therefore need a diverse governance body that can set and uphold editorial standards.

3. Misinformation definition standards - Many different definitions of misinformation have been proposed, but none has been accepted universally. This project will require an agreed-upon definition.

4. Content format standards - To easily ingest content from different publishers for inclusion in alerts, a standard format needs to exist for a fact-check or misinformation warning; one candidate is sketched below.
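
For the fourth deliverable, one existing candidate is schema.org’s ClaimReview markup, which many fact-checkers already publish. A minimal sketch with hypothetical example values:

    # Sketch of a fact-check record using schema.org's ClaimReview
    # vocabulary; the example values are hypothetical.
    import json

    claim_review = {
        "@context": "https://schema.org",
        "@type": "ClaimReview",
        "url": "https://bitpress.news/misinformation-alerts/example",  # hypothetical URL
        "datePublished": "2018-11-10",
        "author": {"@type": "Organization", "name": "Bitpress News"},
        "claimReviewed": "Example claim circulating among publishers.",
        "reviewRating": {
            "@type": "Rating",
            "ratingValue": 2,
            "bestRating": 5,
            "worstRating": 1,
            "alternateName": "Uncertain",   # mirrors Bitpress's 'Uncertain' status
        },
    }

    print(json.dumps(claim_review, indent=2))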