
The Underlay: sharing indicators as a public knowledge graph

Published on Nov 17, 2018

Many projects produce trust indicators for sources and documents. They differ in whom they rely on to assign indicators and in what they measure. A given indicator is often just the latest measurement assigned by a given group or process, as metrics are updated over time. These differences isolate each indicator in its own silo, with its own API.

Furthermore, using indicators in isolation limits their usefulness. Many reviews publish only the latest result of their metric, not the underlying data + function behind it. Indicators are gamed over time and may need adjustment. They may need to be combined with one another, and with other context, to serve different audiences. Readers, reviewers, and reusers may want to see how an indicator was arrived at, assess its truth on their own, or apply their own function to the same raw observations.

We need a shared framework for describing indicators and the full context of their measurement. Related services can then be decoupled and shared across projects: identifying and clustering documents, authors & publishers, and providing discovery and query services for finding & sharing indicators.

The Underlay is a distributed graph of knowledge, including assertions of alignment and connection between existing databases. It captures claims and their provenance, rather than trying to capture truth.
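
To make this concrete, here is a minimal sketch of what a claim-with-provenance record could look like, assuming a simple subject/predicate/value shape. The field names and identifier formats are illustrative assumptions, not the actual Underlay data model.

```python
# A minimal sketch of a claim-with-provenance record. The shape and
# field names are illustrative assumptions, not the Underlay's model.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Assertion:
    subject: str       # identifier of the entity the claim is about
    predicate: str     # the relation being asserted
    value: object      # the asserted value
    asserted_by: str   # the group/process making the claim
    source: str        # where the underlying observation came from
    asserted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Two groups can assert conflicting values about the same document;
# the graph keeps both claims, with provenance, rather than picking
# one as "true".
a1 = Assertion("doc:123", "credibilityScore", 0.8,
               asserted_by="org:reviewers-a", source="survey:2018-10")
a2 = Assertion("doc:123", "credibilityScore", 0.4,
               asserted_by="org:reviewers-b", source="model:v2")
```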

Share your indicators & components

Combining indicators improves their strength and our understanding of a document or claim. Places to start:

Share granular data:

  • Publish algorithms and their versions

  • Name and share underlying sources + analytic functions (see the sketch after this list)
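
As a sketch of what sharing granular data could look like: below, raw observations are published alongside a named, versioned analytic function, so a reuser can recompute the indicator or apply a different version of the function to the same data. All names here (compute_score, trust-score) are hypothetical.

```python
# Hedged sketch: publish the raw observations and the named, versioned
# function, not just the final number. All names are hypothetical.
import statistics

def compute_score_v1(observations: list[float]) -> float:
    """Version 1 of the analytic function: a plain mean."""
    return statistics.mean(observations)

def compute_score_v2(observations: list[float]) -> float:
    """Version 2: a median, more robust to gaming by outliers."""
    return statistics.median(observations)

observations = [0.9, 0.8, 0.85, 0.1]  # raw per-review trust ratings

published = {
    "indicator": "trust-score",
    "function": "compute_score",
    "function_version": "2.0",
    "observations": observations,  # shared so others can recompute
    "value": compute_score_v2(observations),
}

# A reuser who prefers the older function can recompute from the
# same shared data:
reusers_value = compute_score_v1(published["observations"])
```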

Share schemas and identifiers:

  • Name your indicator schemas (such as ClaimReview; see the example after this list)

  • Use identifiers for entities (authors, organizations, documents)
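
ClaimReview is the schema.org type many fact-checking tools use to mark up reviews of claims. Below is an illustrative instance, expressed here as a Python dict ready for JSON-LD serialization; all values, including the DOI, are made up.

```python
# An illustrative schema.org ClaimReview record; all values are made up.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "claimReviewed": "Document X was peer reviewed in 2018",
    "author": {"@type": "Organization", "name": "Example Fact Checkers"},
    "datePublished": "2018-11-17",
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 4,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly accurate",
    },
    "itemReviewed": {
        "@type": "CreativeWork",
        # A shared identifier (here a placeholder DOI) lets indicators
        # from different projects be joined on the same entity.
        "sameAs": "https://doi.org/10.0000/example",
    },
}

print(json.dumps(claim_review, indent=2))
```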

Externalize and share services:

When classifying or clustering claims and sources, or identifying reviewers, define the task as a service, and either specialize in providing it for others or use a shared service alongside them.
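
A sketch of what such a shared service's interface could look like, taking entity resolution as the example task; the class and method names are invented for illustration, not an existing API.

```python
# Sketch of entity resolution as a standalone, shareable service.
# The interface and all names are illustrative assumptions.
from abc import ABC, abstractmethod

class EntityResolutionService(ABC):
    """Shared service: map free-form names to canonical identifiers."""

    @abstractmethod
    def resolve_author(self, name: str, affiliation: str = "") -> str:
        """Return a canonical author identifier (e.g. an ORCID URL)."""

    @abstractmethod
    def cluster_documents(self, doc_ids: list[str]) -> list[list[str]]:
        """Group document identifiers that refer to the same work."""

# Each indicator project calls the same service instead of
# re-implementing its own matching, so clusters of authors and
# documents stay consistent across projects.
```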

Examples: integrating with aggregate databases; keeping multiple iterations of a metric, each timestamped and stamped with the metric version that produced it (see the sketch below). Extensions: federating queries.
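
For the metric-iterations example, here is a sketch of storing timestamped, version-stamped iterations, so consumers can see how an indicator evolved rather than only its latest value; the record shape is an assumption.

```python
# Sketch: keep every iteration of a metric, stamped with a timestamp
# and the metric version that produced it. The shape is illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricIteration:
    document_id: str
    metric: str
    metric_version: str  # version of the analytic function used
    value: float
    measured_at: str     # ISO 8601 UTC timestamp

history = [
    MetricIteration("doc:123", "trust-score", "1.0", 0.81, "2018-06-01T00:00:00Z"),
    MetricIteration("doc:123", "trust-score", "1.0", 0.74, "2018-09-01T00:00:00Z"),
    MetricIteration("doc:123", "trust-score", "2.0", 0.55, "2018-11-01T00:00:00Z"),
]

# The "current" indicator is just the latest iteration; the full
# history stays queryable for audits and recombination.
latest = max(history, key=lambda m: m.measured_at)
```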

