Elan: notes the long history of problems this raises. Not easy!
Elan: deciding which judges to source is a matter of reputation; you can read something about the author into that choice.
Daniel: UX of AI systems. Can visuals of the system affect credibility?
Jen: Subject of trolling. Lots of political attacks. Anyone working in the public interest gets attacked by pseudonymous accounts. Investigating this makes you a target too.
Karen: UX on AI systems — crowdsourcing input on collective decisions. How to formalize?
Matt: Software for legal analysis. Not deeply tied to rep measures.
Neil: Interested in thinking about credibility problem in news sharing. Credibility of claims + people.
Joshua: can reputation be used to appoint community moderators?
Carl: software dev; wants a system that benefits from other people’s collections of claims about [dis]reputability, and likes to give back to those, which is why he’s at MisinfoCon. Different people will see different speakers as credible. Empower people to do this efficiently, and maybe be exposed to different viewpoints as you do.
Adi: Storyful — reputation of organizations. Working on a CredCo research group related to health (source + network rep).
Stefan: regulatory (private / self / etc.) responses to misinformation.
Sandro: Credibility data flows. Can we assume crowdsourcing works without reputation? We need a working reputation system on the open internet. (TrustProject lets you self-identify, but may be vulnerable to fake profiles.) Interested in how decentralized social systems can be resilient against misinfo attacks.
Chris Marciniak: Following misinfo campaigns. AI processing (content rep). Reputation models in subjective logic.
Aviv: attends to human reputation (behavioral)
Sarah Schwettmann: cog neuro; interested in reputation as a whole and in avoiding overfitting. How can we use ML in addition to human labelling (for reputation across this spectrum)?
SJ Klein: Look at reputation on wikis. They tend to use a combination of source, content, and network/behavior reputation.
Elan Pavlov: Look at assigning reputation based on who you communicate with / rebroadcast.
What metrics do you use regularly / in your work?
Source reputation
Active: List of reliable news sources, Yahoo! directory, Stack Overflow badges, Reputation Score (NewsGuard, &c.), Journal impact factor
Passive: Google TrustRank, newness on a site, whitelisted to friends
Content/behavior reputation
Active: eBay, Amazon, likes*, behavior → incremental temporary silencing (chat)
Passive: Bounce rate (for a search ranking), unlikely behavior. “That Black Mirror episode”
Network reputation
Active: PGP, web of trust
Passive: PageRank, HITS (sketch after this list)
Family (Class/group) reputation
Active: Mafia membership
Passive: Prejudice / image-based
Other: Behavioral, composite
Active: “Good for families” travel rating, VRChat safety & trust system
Passive: Credit score, social score (China)
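A minimal sketch of the passive network-reputation idea above (PageRank over an endorsement graph); the graph, names, damping factor, and iteration count are all hypothetical:

```python
# PageRank sketch over a hypothetical "who endorses/rebroadcasts whom" graph.
# Rank flows along endorsements; a damping term keeps some rank spread evenly
# so no node can monopolize it.

def pagerank(graph, damping=0.85, iterations=50):
    """graph maps each node to the list of nodes it endorses."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for node, targets in graph.items():
            if not targets:  # dangling node: spread its rank evenly
                for n in nodes:
                    new_rank[n] += damping * rank[node] / len(nodes)
            else:
                share = damping * rank[node] / len(targets)
                for t in targets:
                    new_rank[t] += share
        rank = new_rank
    return rank

endorsements = {
    "alice": ["bob", "carol"],
    "bob": ["carol"],
    "carol": ["alice"],
    "mallory": ["mallory2"],   # a closed pair endorsing only each other:
    "mallory2": ["mallory"],   # it retains its base rank, one reason passive
}                              # network metrics need anti-collusion defenses
print(pagerank(endorsements))
```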
It would be nice to be able to choose what sources + clusters + behaviors you trust, and have credibility compiled based on that.
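A sketch of what that could look like, assuming per-axis scores already exist; the axis scores and reader weights below are invented:

```python
# Sketch of reader-configurable credibility: the reader chooses how much
# weight each reputation axis gets, and per-axis scores in [0, 1] (invented
# stand-ins for real source/content/network signals) fold into one value.

def credibility(axis_scores, weights):
    total = sum(weights.values())
    return sum(weights[axis] * axis_scores[axis] for axis in weights) / total

scores = {"source": 0.9, "content": 0.6, "network": 0.4}        # hypothetical
reader_prefs = {"source": 2.0, "content": 1.0, "network": 1.0}  # this reader
print(credibility(scores, reader_prefs))  # ~0.7 under this weighting
```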
I don’t like the way we frame trolls as opposed to journalists. Some trolls really have been let down and feel untrusted. So journalism needs to regain trust; but the trolls are drawn into a mob, and it becomes a category-fight.
Weight given to different participants: those with the most time or CPUs tend to win an argument, in many environments. But the environments are often built around the idea that every person has equal ‘membership’ or ‘rights’ in the community.
Think about being believed and believable; every model encodes some notion of beliefs.
Another dimension here: Objective/universal v. subjective/contextual (Karen)
reputation/credibility systems are subjective shortcuts, which we’re trying to make universal. How do metric builders think about this?
When looking at a human: “which of the people I follow block them?” (sketch below)
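One way that signal could be computed, with hypothetical follow/block data:

```python
# Sketch: what fraction of the accounts I follow have blocked this person?
# Account names and block lists are hypothetical.

def blocked_by_my_follows(my_follows, blocks, target):
    """blocks maps each account to the set of accounts it has blocked."""
    blockers = {f for f in my_follows if target in blocks.get(f, set())}
    return len(blockers) / len(my_follows) if my_follows else 0.0

follows = {"alice", "bob", "carol"}
block_lists = {"alice": {"troll42"}, "bob": {"troll42"}, "carol": set()}
print(blocked_by_my_follows(follows, block_lists, "troll42"))  # ~0.67
```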
NewsGuard restricts participation to those it approves; that doesn’t scale. Expand by signing-circles? Let people already in the network approve others. Or use LinkedIn co-employee links to find comparable institutions. (Sketch below.)
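A rough sketch of signing-circle expansion under one possible rule (membership within a bounded number of approval hops from a vetted seed); the names and hop limit are made up:

```python
# Sketch: grow an approved set outward from a vetted seed, but only let
# members within max_hops of the seed vouch for newcomers.

from collections import deque

def approved_members(seed, approvals, max_hops=2):
    """approvals maps each member to the people they vouch for."""
    hops = {person: 0 for person in seed}
    queue = deque(seed)
    while queue:
        person = queue.popleft()
        if hops[person] == max_hops:
            continue  # too far from the seed to vouch for anyone else
        for candidate in approvals.get(person, []):
            if candidate not in hops:
                hops[candidate] = hops[person] + 1
                queue.append(candidate)
    return set(hops)

approvals = {"vetted-reviewer": ["colleague"], "colleague": ["friend"],
             "friend": ["stranger"]}
print(approved_members({"vetted-reviewer"}, approvals))  # "stranger" is 3 hops out: excluded
```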
=======
What is the goal of developing reputation metrics?
What do we want (to have, to build)?
Can we clearly define the landscape and which part of the problem we’re addressing with each tactic? A few possible goals (add more!):
(i) to provide individual readers with info they need to gauge their confidence in a source and thus construct their picture of truth in the world
(ii) to give platforms a metric to curate usership/discussion
We should be clear about which tactics are useful for (i), (ii), & (…).
Should individuals be able to see reputation scores (potentially the output of some black box), and/or to see all of the info used to create that score (as well as whatever algorithm/metric produced it)?
Perhaps scores as a top-level metric are useful to a platform for making decisions about their userbase (the platform has internally agreed on the metric used to create the score).
Are such scores what we want to provide to individuals? Or do they need the “whole tale”? How gameable is a single score? How easy is reputation-washing?
What audience are reputation metrics for?
a. Platforms: do they give voice?
b. Readers: do they filter, or respect others?
c. Tools: do they prioritize? gather and share full context?
d. Summarizers/reusers: do they check longer-tail context?
What purpose is the reputation for? Helping a journalist quickly determine whether a source is worth quoting leads to one system; encouraging pleasant discussion and debate suggests another. Avoiding error? Reducing the power and scope of malicious propaganda?
Ex: can platforms get enough out of simple content-agnostic tools? (Sandro: hopefully no. That’s dehumanizing.) Do readers/people need more than this? (Yes: content matters a lot, and great filters are a gift.) Openness of data and procedure is a key precondition for aligning these different approaches. Adding crisp white/blacklists as well as messy gray areas can vastly improve experience and flexibility in handling edge cases (sketch below).
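One possible shape for that mix of crisp lists and gray areas; the domains and the gray-area scorer are hypothetical:

```python
# Sketch: allow/deny lists settle the easy cases; everything else lands in a
# gray area that gets a softer score instead of a binary verdict.

ALLOWLIST = {"example-wire.org"}          # vetted sources (hypothetical)
DENYLIST = {"known-fabricator.example"}   # confirmed bad actors (hypothetical)

def source_verdict(domain, gray_score_fn):
    if domain in ALLOWLIST:
        return ("allow", 1.0)
    if domain in DENYLIST:
        return ("deny", 0.0)
    return ("gray", gray_score_fn(domain))  # defer to a messier model

print(source_verdict("unknown-blog.example", lambda d: 0.5))  # ('gray', 0.5)
```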
From James: Did you consider how reputation indices could be misused, either to blacklist an unpopular source or to force conformity (as is done inside a great firewall of note)? [If this is in the wrong place, please move it, but please don’t kill the question.]
Requests:
An open reputation system. With active elements: let people positively state whether they want to affect someone’s reputation when talking about them or their work. With passive elements? Mayyyybe. Can be inferred from context, might not be.
Ways to express the scope of appropriate reputation transfer and to address reputation-washing. Active: be explicit about roles (in title, framing). (See the sketch after this list.)
Capturing people’s willingness to bullshit — saying things they know to be false vs sarcasm vs believing what’s false. (cf. Poe’s law)
Public services that distinguish people from bots/botnets.
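To make the first two requests concrete: one possible data model for an active reputation statement with an explicit scope, so transfers outside that scope get discounted. Every field name here is an assumption, not an existing standard:

```python
# Sketch of an "active" reputation statement carrying an explicit scope, so
# reputation earned in one role doesn't silently wash into another.
# All field names are hypothetical.

from dataclasses import dataclass

@dataclass
class ReputationStatement:
    author: str    # who is making the statement
    subject: str   # who/what it is about
    intent: str    # "endorse", "dispute", or "neutral-mention"
    scope: str     # role/domain it applies to, e.g. "epidemiology"
    weight: float  # how strongly the author stakes their own reputation

def applicable_weight(stmt, context_scope, transfer_discount=0.1):
    """Discount statements used outside their declared scope."""
    if stmt.scope == context_scope:
        return stmt.weight
    return stmt.weight * transfer_discount

s = ReputationStatement("alice", "bob", "endorse", "epidemiology", 1.0)
print(applicable_weight(s, "epidemiology"))  # 1.0: in scope
print(applicable_weight(s, "politics"))      # 0.1: reputation doesn't transfer freely
```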
For details, see the Credibility indicator list
The main indicators with some public data: TrustProject (8 indicators measured), ClaimReview (dump and feed available).
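For orientation, the rough shape of a single ClaimReview record (schema.org markup, rendered here as a Python dict); the claim, fact-checker, URL, and rating values are invented:

```python
# Rough shape of one schema.org ClaimReview record, as found in public dumps.
# The specific claim, organization, URL, and rating below are invented.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factchecker.example/reviews/123",
    "claimReviewed": "Drinking bleach cures the flu.",
    "author": {"@type": "Organization", "name": "Example Fact Checkers"},
    "datePublished": "2018-05-01",
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
}
print(json.dumps(claim_review, indent=2))
```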
A number of smaller research projects offer in-depth analyses of small sets of articles: CredCo (50 articles), Jigsaw (planning for 300 articles), CLAIMS (1000 health news items).
A few platform datasets open for research:
Twitter (IRA trolls, Elections integrity hub), Reddit