This document is a primer for developing and improving technological methods to help promote trust and accuracy, especially on the web and in news reporting. While not comprehensive, it attempts to guide people away from overly simplistic designs and to reveal a wide array of potential solutions. It concludes by enumerating technical standards that will be needed to enable many of these methods on the open web. Impatient readers may want to start with section 10 (Promising Technical Approaches) or section 11 (Potential New Web Standards).

This is an Internal Review Draft (an "Editors Draft"). Members of the Community Group, please use this form to express your view on whether this should be published for Public Review.

This snapshot version is made from time to time, but is not necessarily more stable or authoritative than the Google Docs version of this document, where edits are made.


This document summarizes and expands discussions from mid-2018 within the W3C Credible Web Community Group. It contains ideas and other content contributed by group members, often in discussion with the wider community.

Comments are welcome and are especially useful if they offer specific improvements which can be incorporated into future versions. Please comment either by raising a github issue or making inline comments on the Google Docs version of this document. If neither of those options work for you, please email your comments to public-credibility-comments@w3.org (archive, subscribe).

Introduction

Can you tell, when looking at a random web page, whether you should trust it? When scanning a page of reviews or search results, do you know which matches come from legitimate sources and which are scams? When reading a news feed, can you tell which items ought to be believed and which are slanted or manipulative? Can you detect propaganda or outright lies? Perhaps most importantly, what happens when you inevitably guess wrong while making some of these credibility assessments, and you unknowingly share falsehoods with your community, helping to make them viral? What if you are misled into making bad decisions for yourself and the people you care about, with potentially disastrous consequences?

The Credible Web Community Group was formed at W3C, the organization which develops technical standards for the web, to look for technological approaches to this “credibility assessment” problem. It's not that we think technology can solve every problem, especially ones as deeply human and complex as this one, but it seems likely that some technology is making matters worse and that certain designs could probably serve people better. For some of us, creating better approaches to credibility assessment seems like a good way to help.

Scope

This report, like the Community Group which developed it, is focused on web-centric technical solutions which require standardization.

The web isn't everything, but it's arguably the knowledge backbone of the world today.  Everything can and usually does tie back to the web to some degree, so if the web is a reliable source of accurate information, that should help quite a bit.

With standardization, we can help people (via their computers) work together toward improving credibility assessment and the availability of trustworthy information.  As detailed below in Potential New Web Standards, the area we find most suitable for standardization is web data interchange, where websites publish data for other systems to consume. W3C standards already cover the basic mechanics for this, and they are widely implemented, most popularly with vocabularies defined by schema.org (a collaboration among search engine companies). What remains is to standardize vocabularies (also known as schemas or ontologies) for exchanging data which bears directly on credibility assessment, that is, “credibility data.”

There may also be a benefit in standardizing new browser functionality. For example, browsers might manage a collection of independent credibility assessment tools which work together to guide and inform the user. This kind of new feature can be pioneered in browser extensions and then later adopted into browsers to increase the user base.  

While this report is forward-looking, some of the technologies discussed are already deployed to some degree in existing systems, particularly where they require no user action, as in prioritizing search results and news feed items. The hope here is to make such systems more accurate and effective by providing them with better data from around the web in a standard format.

First, Do No Harm: Hazards of Intervention

Changing the technology of the web to empower users to be more accurately informed, while likely to be challenging, sounds laudable and even overdue. But there are likely to be unintended consequences resulting from these changes. To use a medical analogy, treatments used to help address this painful condition are likely to have side effects. That doesn’t mean we can afford to do nothing, which itself presents significant risk, but we do need to proceed with caution.

This section lists types of side effects, with some ideas about how to mitigate the risks. A general approach to managing these is discussed later, as a recommendation, in Review Board Process.

In general, this should be understood within the context of W3C’s mission of a Web for All, a Web for Rich Interaction, and a Web of Trust. There may need to be trade-offs among benefits of the web; if so, let’s make them with our eyes open. 

Censorship

The regulation of content, called “censorship” in some contexts, is controversial. Nearly every country in the world considers publication of certain kinds of material (such as child pornography) immoral and illegal.  At the same time, many of the same countries express their belief in the value of free speech and a free press, sometimes in very broad language. In practice, there is rarely a sharp line between content that is prohibited and content that is accepted. Additionally, content that falls on one side of the line in one jurisdiction at one point in time might be on the other side of the line in a different jurisdiction or at a different time. This is a challenge because the web reaches across time and jurisdiction.

Historically, and perhaps largely by accident of design, the web has managed this controversy by making it relatively easy for anyone to publish what they want but also making it hard to run a website while remaining anonymous to law enforcement. Given the current technology, much of the world’s population could easily set up a website and host illegal content. But the same technology makes it fairly easy to track such a person down and, subject to applicable laws, take down the site and/or identify the site’s owner for criminal prosecution. There are technologies which allow sites to be more anonymous, but they are vastly more difficult to use, making them prohibitive for most situations. (This web technology compromise should not be confused with content regulation decisions made within a particular platform, which is a separate issue.)

While some people are uncomfortable with this technology-driven compromise, especially when considering content they would place on the other side of the line from where it currently lies, we suggest that extreme care be taken before sweeping this solution aside for something that might be worse.

Some proposed credibility technologies could upset the current balance. For example, one common suggestion is to allow publishers to be certified as entities which adhere to certain standards. Platforms would then be free to use this certification to provide better services. They might also be pressured to only allow certified content, perhaps through liability concerns. Approaches like this could significantly raise the bar on publication, fundamentally changing the balance in the system. It should not be done lightly.

It may make sense to raise the bar in certain limited ways.  For example, in professions and industries which are already regulated, like medicine, making existing certifications available to the computing infrastructure could increase the quality of information with relatively limited side effects.  Even then, some “escape hatch” mechanism would likely be important, where people who fail to meet that test can still get word out (e.g., as a whistle-blower), perhaps via a trusted third party or an opt-in system for bypassing the safety mechanism.

Centralization

Another possible area of side effects is in shifting the web toward having more centralized control. For example, Elon Musk’s “Pravda” proposal to “create a site where the public can rate the core truth of any article & track the credibility score over time of each journalist, editor & publication” may (or may not) have been tongue-in-cheek, but it received massive coverage and support. Such a system would have strong network effects, likely resulting in a monopoly power in its niche if it caught on. As such, despite whatever value it might bring to its users in the short term, it would stifle competition and free innovation. It would also introduce critical issues with scaling, censorship, and political bias.

In contrast, the architecture of the web is largely decentralized, minimizing bottlenecks and central points of control.  With this in mind, and with the will to do so, something like “Pravda” could potentially be created as a decentralized system, using a variety of techniques including the W3C ActivityPub protocol. 

More modestly, and as a general rule, we suggest that whatever systems are implemented to help with credibility, users always retain control over their data and computing platforms. This aligns with our recommended approach of standardizing vocabularies for credibility data interchange, removing the technological barriers to moving data between systems.

We recognize that some degree of centralization, while it has risks, can have tremendous value and may turn out to be worthwhile. For example, the increased centralization of email has likely helped with reducing the problem of spam. Still, it is vital that independent email systems continue to exist and that they all interoperate.

Ingroups, Echo Chambers, Filter Bubbles

Some proposals are likely to increase the degree to which consumers see only the content they want to see or their community wants them to see, regardless of its accuracy. While many view this as harmful, experts disagree about how serious the problem actually is.

One theory on addressing this is to make clear to users when they are inside their bubble (seeing content algorithmically selected to fit them personally) and to allow them to easily step outside of it whenever they want. The theory is that people won’t always choose to be sheltered, but will instead sometimes look for adventure “outside the wall.”  One early tool for “stepping outside” is Blue Feed, Red Feed.

It has also been suggested that prediction markets incentivize accurate broad knowledge, leading people out of their bubbles, and that commerce/trade in general motivates people to connect across different ingroups and helps people communicate productively and build trust across group identity barriers.

Privacy

Some credibility solutions may impact privacy. For example, a browser plug-in which checks every page the user visits using its proprietary cloud-based service could potentially misuse or leak information. If designed without proper safety measures, it could easily leak private URLs the user visits, and in some cases could even leak secret page contents, such as the user’s medical communications or their employer’s proprietary data. Such systems are strongly encouraged to use independent and public review to build in strong privacy protection, instead of any kind of “just trust us” approach.

Other Unintended Consequences

It can be hard to predict all the ways a technological intervention can go wrong. For example, Facebook reported that adding warnings to newsfeed items indicating when an item had been debunked by fact-checkers resulted in more people reading and spreading false news. Predicting, detecting, and if necessary changing course away from such effects can be even harder in the standardized, multi-vendor, decentralized systems we otherwise prefer.

For more discussion on approaching and mitigating these risks, see Review Board Process.

Terminology

In this document we have attempted to use terms consistently and precisely, hoping to help promote wider consensus on terminology for the field. We are being both descriptive and a bit prescriptive here, although we recognize that in the future better terms may emerge and supplant these.

Roles in a Credibility Ecosystem


In general, the web is an information ecosystem.  When thinking about credibility on the web, it helps to be somewhat careful with our terms. Note that as information flows around the web, a single party may play multiple roles.

(Content) Consumer. Person who is receiving and experiencing some content.  Similar to: Audience Member, Reader, Viewer, Listener, Receiver, or User.

(Content) Provider or Source.  Person or organization who provided the consumer with some content. There may be a supply chain of providers, creating and assembling content before it reaches the consumer. Alternatively, sometimes a chain of friends acts as a provider, passing the content on to each other. Often this is invisible to the consumer, who perceives (and makes credibility assessments about) a single apparent provider. Similar to: Producer, Creator, Author, or Publisher.  “Source” is often more ambiguous, but can be used as an alternative when “provider” gets awkward.

(Content) Promoter. Person or organization who intentionally or unintentionally increases the spread of content. In social media platforms, this can be as simple as “liking.” Commenting on content or linking to it, even to refute it, can increase its spread and visibility due to various algorithms.

(Credibility) Facilitator. Person or organization who is helping the consumer decide what to trust.  Similar to: Moderator, Fact-Checker, Forum, or Comments Editor, but also includes members of the crowd in crowd-source designs.

 

(Web-Based Communication) Platform. Technological system, and by extension the person or organization who maintains and controls it, which is providing the above parties with the infrastructure that enables the content to pass among them. Often controls which items are allowed and ranks items to influence which are seen. Includes content management systems (CMSes), web browsers, search engines, news feeds (as in social networks and microblogging services), and media sharing platforms.

Credibility and Trust

The terminology around credibility and trust varies somewhat across different fields of study and non-technical discussion. Here we propose some terms and an underlying model intended to be precise and formalizable enough for technical and scientific work, while natural enough to make sense in general use. In some cases, motivation, explanation, and advice are included as well.

Claim.  An assertion that is open to disagreement; equivalently, a meaningful declarative sentence which is logically either true or false (to some degree); equivalently, a proposition in propositional logic.

Fact. A claim that is true; equivalently, a claim that accurately describes the actual state of the world.

Accuracy. Degree of truth, especially for claims involving measurements. Can be a useful concept to avoid pedantic distinctions around truth. The claim, “The radius of the earth is 6400 km,” is only somewhat true, as 6371 km is a more accurate figure. Of course, it is not exactly 6371 km, either. Similarly, most “true” statements are not perfectly and completely true, so considering their accuracy may be more helpful.

Believe (in a claim, as in: “Alice believes the world is round”). To be in the mental state of accepting (and consequently behaving as if) the claim is true, at least within some limited context.  Evidence suggests people can sometimes believe contradictory claims and switch among them depending on the context. Trusting a claim is essentially synonymous, although perhaps a bit stronger, like “firmly believing.” One approach to measuring strength of belief is asking how much someone would bet on the claim being true. At the same time, however, people may not be consciously aware of how much they routinely bet their lives on their trust in people and engineered systems (such as cars and medicines) in return for minor gain.

The following definitions are expressed in terms of “some information.”  This idea can be applied narrowly, to a small bit of text, or very broadly, to all the information provided over time by some content provider. For example, at small scale, we can consider the credibility of a particular sentence, or, at large scale, the credibility of a particular news organization in a given month. See additional discussion below in Granularity.

Mislead (some people, by some information).  To cause people to believe a false claim, due to consuming some information. It would require omniscience to know with certainty whether someone has been misled; in reality we can only see how beliefs change and differ, and assess how useful they are, especially in making predictions. Nonetheless, the realization that one has been misled can be quite clear, and we expect the desire to avoid being misled to motivate user adoption of credibility technologies. For simplicity, we include as “misleading” all degrees of severity, including actions that might count as deception, lying, or fraud.

Nonmisleading (of some information, to some consumers). Does not mislead a significant portion of its consumers. Similar to “accurate” and “true,” but includes information other than claims and excludes information which is so confusing that people misunderstand it. That is, “nonmisleading” differs from “true” in that poorly-expressed true information can still mislead, and things that are not true, like jokes or fiction, can still be nonmisleading, as long as they don't actually mislead their consumers.

Disinformation (disinfo). Information which intentionally misleads; information which deceives its consumers.

Misinformation (misinfo). Either: (1) misleading information; or (2) unintentionally misleading information. Sense 2 was promoted by the influential Fake News, It's Complicated and the CoE Report, which defined the set of misinfo to be disjoint from the set of disinfo, instead of the superset implied by sense 1. Because of this ambiguity, we avoid using the term in this document, preferring either “misleading information” (for sense 1) or “unintentionally misleading information” (for sense 2).

Malinformation (malinfo). Information which is harmful without being misleading, such as leaked private information or upsetting images. This is out of scope for this report.

Believable (of information, for a given set of consumers). Appearing acceptable to believe, such as because it does not contradict any currently held beliefs; appearing nonmisleading. (Without omniscience, one cannot tell with certainty that claims are nonmisleading; rather one typically compares them to one's current beliefs.)

Credible (of information, for a given set of consumers). Believable, but with a connotation of additional confidence and motivation. Saying a statement is “believable” is saying it's possible, even reasonable, to believe it; saying it's “credible” is saying it should be believed, at least tentatively.

Credibility (of information, for a given set of consumers). Degree to which information is credible; degree to which information appears nonmisleading and useful (for the given audience). People are typically misled when falsehoods have high credibility (appear true), and they problematically resist believing facts which have low credibility (appear untrue). The term credibility can also be used broadly to refer to the problem space around trust, as in “credibility researchers” or “credibility software.”

Trustworthy and Trustworthiness can be synonyms of credible and credibility, respectively, but there seems to be less consensus on their meaning. For example, a counterfeit that fools everyone is misleading, but since it fools everyone we know it appears nonmisleading, so it is “credible.” People seem to disagree about whether such a counterfeit would be considered “trustworthy.”

Credibility Score (of some information, in some information distribution environment). A numeric value for credibility, on some defined scale. At present, there are no standard scales, so these scores cannot be meaningfully compared. For example, a standard scale might be the statistical probability that the given information will not measurably mislead each member of the population who consumes it in the given environment. (Under such a scale, information expected to measurably mislead 3% of its consumers would score 0.97.) More work is needed to understand how credibility scores might be meaningfully interchanged and compared, and thus what kind of standard scale(s) might be useful.

Someone claiming that information is credible or has high credibility is asserting that the information appears sufficiently unlikely to mislead that they think it should be believed.  Similarly, saying information is not credible or that it has low credibility is saying that it should not be believed because it appears too likely to mislead, in the speaker’s estimation.

We distinguish two kinds of credibility assessment acts:

Credibility assessment tools are software features or applications which perform credibility assessment or help people do so. That is, they are tools which perform credibility analysis to help people make better credibility decisions.

Credibility signal. A small unit of information used in making a credibility assessment; a measurable feature of the information being assessed for credibility, or information about it (metadata), or information about entities which relate to it in various ways, such as the entity who provided it.

Credibility indicator. Commonly, the same as credibility signal. In some communities, “signal” is used for inputs to credibility assessment algorithms and “indicator” is used for the display features added to the output to communicate results to a human consumer. This distinction is not used reliably, however, so use of terms like “input” and “output” is suggested when one needs to make the distinction. See CredWeb Issue 3.

Assessment Strategies: How Can People Tell What To Believe?

Techniques for assessing credibility can be grouped into the following four types of strategies. Each of these strategies could be applied in a range of specific ways, and some tools will likely combine elements of several strategies.

  1. Inspection. Look closely at the content (and the page where it appears) for the presence of features which are statistically associated with low or high credibility. The expectation is that most of these signals will be reliably detectable using software soon. Inspection may be the primary strategy consumers use unconsciously while reading. The hope is that assessment accuracy can be significantly improved with more refined software and with additional user studies.
  2. Corroboration. Identify salient claims made (or implied, or relied upon) in the content and check other sources which offer an assessment of the claim (e.g., fact-checks), related claims, or evidence helping the consumer accurately assess the claim. This is perhaps the technique most people fall back to when they become suspicious, but it can be prohibitively time-consuming to do regularly or do thoroughly.  It is also unable to help when fact-check and evidence providers are also suspect.
  3. Reputation. Assess the credibility of a content provider by gathering statements about them from other providers, typically ones with known credibility.  This can be done recursively, forming a reputation network from a few known and trusted “root” providers to help assess many unknown and suspect ones. This can also help establish the credibility of content, content providers, and corroboration sources, and can be done using a combination of institutional data sources (e.g., certification authorities) and informal/personal data sources (e.g., the user’s social group of friends, contacts, and influencers).
  4. Transparency. Consider what the provider says about themselves and their content.  For instance, content may be labeled as opinion or satire, and providers may label themselves as partisan or as having processes and controls the consumer views as untrustworthy.  By itself, this data is easy to fake, but coupled with corroboration, reputation, and off-line social and legal incentives, it may be quite effective. Transparency includes providing meta-information which reduces the risk consumers will be misled, such as statements specifying the intended audience or disclaiming certain reliability practices or liability. To some degree, transparency allows providers to set the standards by which they are to be judged, simplifying the credibility assessment task.

Granularity: Which Entities Are You Assessing for Credibility?

People necessarily make credibility assessments about content items and about the people who provided the content. It seems likely they make these assessments across a wide range of granularities, all of which need to be taken into account in understanding and assisting with credibility.

People and Organizations (Content Providers)

Content providers can each be assessed for credibility; that is, one can attempt to determine the likelihood the information they provide is nonmisleading.  This assessment might be done across a wide range from the general (coarse-grained, wide scope) to the very specific (fine-grained, narrow scope):

Content and Claims

Reasoning about the credibility of content similarly requires working with a range of granularities. For example, we might consider the credibility of a complete snapshot of Wikipedia, of a particular Wikipedia page, of the text of a paragraph, or of a specific claim.

Content which does not make explicit assertions, including jokes and photographs, can also be seen as implying a variety of claims, although that implication relationship may itself be subject to uncertainty.  

Aggregation

Sites which aggregate content from other providers, including social media sites, will need special consideration, as will sites which analyze or review other content, sometimes including false content as an example or evidence. That is, a site should be able to present misleading information for analysis without risking its own credibility due to tools simply seeing it hosting misleading information.

Complications due to the Web

In general, in designing systems which inspect and annotate web content, it often works to initially assume each interesting chunk of content will be on its own web page. Then, one can expand the granularity up to web origins (loosely, domain names) and shrink it down to page fragments (loosely, identified portions of a page or portions of media shown on the page).  Note that the Web Annotation framework specifies a way to make essentially any portion of any version of a page be its own addressable fragment, although that is not yet widely implemented.
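
As a concrete illustration of the Web Annotation approach, here is a minimal example annotation, following the W3C Web Annotation Data Model, which targets one quoted passage within a page so that a credibility note can be attached to just that fragment. The URLs, quoted text, and comment are invented for illustration.

    {
      "@context": "http://www.w3.org/ns/anno.jsonld",
      "id": "https://example.org/annotations/42",
      "type": "Annotation",
      "body": {
        "type": "TextualBody",
        "value": "This figure contradicts the report it cites.",
        "format": "text/plain"
      },
      "target": {
        "source": "https://news.example.com/2018/08/story",
        "selector": {
          "type": "TextQuoteSelector",
          "prefix": "officials said that ",
          "exact": "unemployment fell by 40% last month",
          "suffix": ", a historic drop"
        }
      }
    }

Because the selector quotes the exact text along with its surrounding context, the same fragment can usually be located again even if the page's markup changes.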


Credibility ratings for a web page may at first seem to be just about whether the content is nonmisleading, but in practice websites can change their content or even secretly show different content to different observers, so the trustworthiness of the site maintainer can be a factor as well.  To sidestep this added complexity, protocols for securely referencing site content versions and noticing changes are likely to be necessary.  Fortunately, secure hashes and third-party archives, already standardized and implemented in Memento, seem well-suited to help. In addition to securing against attack vectors like this, such a secure history mechanism can provide useful historical evidence while assessing credibility.
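
There is not yet a standard way to bundle these pieces together; the sketch below only suggests what a secure content reference might contain, pairing a live URL with a third-party archive snapshot and a content hash. The vocabulary namespace, property names, URLs, and hash value are all invented for illustration (Memento itself operates at the HTTP level, per RFC 7089).

    {
      "@context": { "ex": "https://example.org/secure-reference#" },
      "@type": "ex:ContentReference",
      "ex:liveUrl": "https://news.example.com/2018/08/story",
      "ex:snapshotUrl": "https://archive.example.org/20180815120000/https://news.example.com/2018/08/story",
      "ex:snapshotDate": "2018-08-15T12:00:00Z",
      "ex:contentSha256": "placeholder-hash-value"
    }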

Threat Models: How Might Attacks Impact You?

In avoiding harm, including all the different kinds of harm done by misleading information, it can help to understand the people and systems involved.  This section is a brief survey of some of the aspects of misleading information on the web. People developing credibility tools are strongly encouraged to dig more deeply, within their areas of focus, to reduce their chances of building the wrong defenses.

Attacker Motivations

Providers of misleading information have many motivations, and often their motivations will never be known. Still, it can help to consider some of these types of attackers and their motives:

  1. Ethical business, wants a long-term business relationship.  Their legitimate advertising turns into an attack when it goes too far, perhaps accidentally misjudging the consumer’s knowledge, reasoning ability, or resources.
  2. Unethical or illegal business, wants money with little regard for consequences. Can range from selling useless or dangerous products to starting rumours which impact stock prices.
  3. Person or group seeking political influence or power (non-ideological; might be ethical or unethical)
  4. Person or group pushing for specific changes in society by political means (ideological; might be ethical)
  5. Disruptor, wants to increase fear, uncertainty, doubt, and distrust in particular products or institutions, or more broadly in a population
  6. Prankster, wants amusement or personal attention.
  7. Person seeking entertainment or satisfaction at the expense of others, e.g., by provoking an argument.
  8. Person under the influence of mental illness or mind-altering substances (e.g., alcohol), displaying erratic, irrational behavior.

There are, of course, many other kinds of difficult people online.

For a survey on types of bad information and some of the motives behind them, see Fake News, It's Complicated.

Attacker Resource Levels

How much effort is an attacker willing and able to devote to a disinformation attack? Here are some options, roughly in order of increasing threat.  The higher resource levels are often not worth designing against and may not even be ethical or legal to design against.

  1. Well-intentioned and careful person (but accidents still happen)
  2. Well-intentioned but careless person (accidents are more likely)
  3. Person willing to be annoying, to cause non-substantive harm
  4. Unethical but legal (perhaps by finding a loophole in the laws)
  5. Non-violent criminal
  6. Violent criminal
  7. Single rogue (unauthorized) government agent (in a non-rogue agency)
  8. Motivated/coordinated online crowd (righteous mob)
  9. Violent criminal organization
  10. Rogue (unauthorized) government agency
  11. Foreign government (authorized by its own laws)
  12. Your local government (authorized by law)
  13. Your national government (authorized by law)

Intended Impact

  1. Voting
  2. Viewing advertisements
  3. Social media behavior
     1. Engage
     2. Reply
     3. Share
     4. Like
     5. Report
  4. Offline social behavior
     1. Spreading rumors
     2. Attending rallies
     3. Individual harassment, vandalism, violence (see stochastic terrorism)
     4. Mob violence
  5. Consumer choices, what to buy
  6. Donation choices, where to give money
  7. Lifestyle
  8. Shape the narrative, influencing how future stories are interpreted
  9. Weakening trust in institutions

Mental Attack Vectors

  1. Emotional Hijacking - Humans are great at making rational decisions, as long as they don’t care about the outcome
  2. Setting the initial frame
  3. Ingroup set against outgroup
  4. Fear, Uncertainty, and Doubt
  5. Misleading with statistics - e.g., people are terrible at assessing the odds of unlikely events
  6. Misleading with fallacious arguments (e.g., ad hominem attacks, false equivalence)
  7. Judging a book by its cover - a good-looking website can be extremely convincing
  8. Confusingly similar account names, domain names, or brands; sometimes combined with copied style and content, to produce a counterfeit site
  9. Astroturfing (hiding the sponsors or members of an organization to appear credible)

Technological Attack Vectors

There are numerous ways to scam people online and a vast array of weaknesses in computer security systems which can help. A few are particularly relevant in credibility assessment:

  1. Sock puppets - These are online accounts with deceptive identity features, such as a foreign agent who masquerades as a concerned citizen or a member of a relatively trusted identity group, to increase their credibility.
  2. Bot armies - By having large numbers of automatically managed accounts (e.g., fake followers) one can sometimes make a message appear popular or unpopular, as desired, and trick algorithms and people about what “everybody” thinks.  A variation is to create many websites, which can seem like many independent voices saying the same thing and can falsely appear highly credible. (SEO link farms are an instance of this.)
  3. Copied websites - People often recognize a website and mentally track its reputation based on how the site looks, not its actual domain name. Copying a site’s appearance and content, then perhaps changing a few key details, is relatively easy.
  4. Email impersonation - Because email doesn't authenticate senders, consumers can potentially be tricked with borrowed reputation and misinformed, perhaps via a copied (and altered) site. This is a variant on phishing and spear phishing where harm can be done without involving user credentials (which is where most anti-phishing defenses are concentrated).

Stakeholders: Who Might Care Enough To Do Something?

Also see landscape spreadsheet.

Related Projects

[ Ideally for each entry we should have some description of how it relates and any active liaison. ]

Events

Technical Standards (Enabling Computer System Interop)

Process Standards (Standards of Organizational Behavior)

Scientific Research

There are dozens if not hundreds of research groups studying aspects of credibility around the world. Listed here are groups that have expressed interest in helping with technical standards work around credibility. Please add your group if this applies.

Grant-Making

Product Development

  • Hypothes.is  Web annotations
  • Meedan News verification tools
  • AboutThem.info  plans to facilitate applications based on me/us/them statements published on the Web in Strategy Markup Language (StratML) format.

Industries

Specific vendors/products are named as an example when an employee or representative has joined the group.

Attaining Adoption

Many of the proposed solutions have strong network effects. They offer users minimal value until they have a critical mass of users. This makes it quite challenging to achieve adoption.

This kind of challenge is usually addressed by achieving critical mass in a small community and then expanding outward. (Facebook famously started at Harvard and then gradually expanded to other universities before eventually opening to the public.) Within the target small community, adoption can be promoted or even required by the people managing the community.

Some specific approaches:

In general, funding is an issue. There are not yet any clear business models for credibility systems. Philanthropic funding may be necessary, at least for now.  

User Experience (UX)

The user experience (UX) of credibility assessment tools is likely to vary widely, given the range in granularity, subject matter, and computing environments where credibility matters and where software can help. Some of the ways users might experience credibility assessment tools:

We can organize the discussion of possible tool designs using a spectrum of user effort, from no effort at all, where the user benefits without doing anything, up to major effort, where a user might be spending hours or even days trying to determine whether to trust a single claim. The appropriate tools are quite different at different points along this spectrum. We will use this concept to organize the tool discussions below, but here we consider it in three ranges.  

Passively Guiding the User (Low Effort)

Some of these could impinge on the user's autonomy or confuse the user, but done properly it may be possible to “nudge” the user while fully respecting their wishes. Additionally, these concerns can be managed by allowing users to opt-in and configure settings for these features.

Within this level, we could zoom in on these approaches that require no action from the user, perhaps conceptualizing effort-to-ignore as “intrusiveness.” It is unclear whether to consider blocking/demoting content as highly intrusive (it would take a lot of effort to get around) or completely non-intrusive (since the user won’t usually notice it).

Alerting User

With some of these options there is a risk of triggering a backfire effect, as users may cling to false beliefs more tightly when corrected, especially if the correction is not highly credible to that user.

Initiated by User

These only help when the user consciously wonders whether to trust the content.

Most of these are perhaps best done at the claim level, where the user indicates a specific questionable claim in the content, rather than a whole page or site.

Promising Technical Approaches: What Can Be Done?

This section enumerates proposals for technological approaches to reduce the risk of being misled.  Items are organized primarily by credibility assessment strategies. Our hope is to promote the development and adoption of effective tools, as well as  interoperation between related tools where that would benefit users. In time, this list might evolve to include links to relevant research and available products. In the meantime, we hope it will inspire and guide students, startups, hackathons, incubators, social investment funds, and other people who want to help.

It should be noted that if credibility tools start to become effective at scale, or look like they might do so, they will themselves come under attack from parties that benefit from higher levels of distrust and deception.

This initial summary lists some of the more promising ideas, described as they might be implemented in a traditional web browser environment. It is arranged by effort level, so that features which require more effort from the user appear farther down. This is per-item effort; install and setup effort is not shown.

Effort level 0 (per-item user action: none)
  • All strategies: Credibility score used in feed/search/suggestion ranking.

Effort level 1 (per-item user action: glance at visible indicator)
  • Inspection: Score visible on/near item. Highlight when unusually high/low.
  • Corroboration: Per-claim or whole-article fact-checks give indicators (click for details).
  • Reputation: Display verified and recognizable faces and logos. Show third-party +/- flags on source.
  • Transparency: Score displayed. Key signals (e.g., name, market, item type) displayed.

Effort level 2 (per-item user action: confirm indicator)
  • All strategies: Very low score results in a warning which must be confirmed before proceeding, e.g., pop-up, interstitial page, or confirmation before sharing.

Effort level 3 (per-item user action: request additional information, presented immediately)
  • Inspection: Show scores per content feature and/or per source.
  • Corroboration: Show all available fact-checks, helpfully arranged. Lead to full content of each fact-check.
  • Reputation: Expanded displays with more related info, info from more sources. View social graph.
  • Transparency: Show all score elements; leads to showing all disclosures.

Effort level 4 (per-item user action: request additional information, presented later, e.g., after payment)
  • Inspection: Request (additional) human annotation be done/revealed.
  • Corroboration: Request (additional) fact-checks be done/revealed.
  • Reputation: Request more information from the graph, including from friends and certification businesses.
  • Transparency: Request the site reveal more about itself.

Effort level 5 (per-item user action: minor tasks, taking seconds)
  • Inspection: Walk the user through annotating a few high-value signals.
  • Corroboration: Identify key claims to assist in search.
  • Reputation: Expand the user’s trust graph.
  • Transparency: Apply other strategies (e.g., corroboration) to verify disclosures.

Effort level 6 (per-item user action: intermediate tasks, taking minutes)
  • Inspection: Walk through all standard signals.
  • Corroboration: Search and link to evidence; see other uses and arguments surrounding that evidence.
  • Reputation: Provide source credibility evaluations, to clarify thinking and as a member of a gift economy.
  • Transparency: Social outreach to the site asking for more disclosure.

Effort level 7 (per-item user action: major tasks, taking hours or more)
  • Inspection: Help discover and validate new signals.
  • Corroboration: Do a full fact-check, possibly in an adversarial process.
  • Reputation: Push others to provide credibility evaluations.
  • Transparency: Provide third-party versions of the data, to assist and corroborate.

Inspection

  1. Tools can help humans inspect content for credibility signals. This might take the form of a checklist/questionnaire, combined with an annotation tool, where the user selects the portions of the content which have certain features.  
  2. That feature annotation data can then be shared, subject to privacy considerations, for multiple purposes.
  3. Data from multiple annotators can be compared against each other and against standardized test data to assess how well the measurement process is working, in terms of both precision and accuracy, and to reduce the effects of bad faith annotators.
  4. Annotations on test content can be used to validate the connection between signals and accuracy. Note the connection might be a positive correlation (signal indicates high credibility), a negative correlation (signal indicates low credibility), or there might be a multifactor connection found by machine learning techniques.
  5. This data can be used directly to signal credibility levels to other consumers.
  6. It can also be used to train machine models which can then potentially also do the job listed in step 1.

People in step 1 might be motivated by:

Machines programmed/trained to do step 1 will probably be faster and cheaper, and might soon become more accurate and trustworthy. They will, however, likely have unpredictable failure modes (bugs, unexpected training artifacts, etc.) which might pose a security vulnerability to be considered.

If such automated systems are widely deployed, it is likely attackers will try to game the system, producing misleading content engineered to register as highly credible to the algorithms. Defending against this is similar to defending against malicious SEO.  Techniques include:

  1. Keep secret the algorithm for combining the feature scores into a final rank (or the weights if it’s a simple linear combination), and let this vary from system to system and change over time.
  2. Give a negative reputation to any providers caught attempting to game the system. (This can itself be problematic, especially if there’s no clear line between legitimately trying to appear credible and doing so maliciously. If providers might be punished for imperfectly adopting the technology, adoption is likely to be seriously impaired.)
  3. Give additional weight to features which are disproportionately useful for attackers. For example calls to viral action and emotionally manipulative language are cheap and effective. Counting them as negative features, even if they are not indicative of misleading content, might reduce the profits of attackers at a greater rate than those of legitimate providers.
  4. Similarly, on the positive side, one could give additional weight to features which legitimate providers can include more cheaply than attackers, such as use of correct grammar and journalistic language.

All of these points have an “arms-race” quality, and the last two might quickly devolve into competing AI, with adversarial neural networks being used to help create language which looks perfect to the credibility assessment systems.  As such, it seems unlikely inspection alone is a viable long-term strategy.

Corroboration

See issues and discussion at https://github.com/w3c/cred-claims.

  1. Allow professional fact-checks to be published in machine-readable form so they can more easily be matched, used in automated assessments, and shown to users when appropriate. (This is already being done to some extent using ClaimReview; a minimal example appears after this list.)
  2. Allow claim-extraction processes to publish their output for broad consumption. This would be a feed of claims found in the media and judged to be of relatively high value for checking.
  3. Allow users to express their desire for specific claims in some content to be checked. Might include an offer to pay for results which are good enough by some metric.
  4. Allow relationships between claims to be expressed, such as rephrasings to be more precise and context-free versus more terse and context-sensitive; more in agreement versus disagreement with the claim itself; making a broader versus narrower claim, between natural languages, and opposites.
  5. Allow the structure of a fact-check to be exposed as machine-readable, showing argumentation and secure links to the evidence.
  6. Allow end-users to express what they feel or know relevant to a claim’s accuracy in a way that might be aggregated into an accurate credibility assessment. It's important this not be simply popularity polling; vital but revolutionary ideas, when first expressed, might not be believed by their audience.
  7. Make it easy to view reviews of related claims from many sources at once, including some data about the sources, such as assessed bias.
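
As a minimal illustration of item 1 above, a fact-check might be published with the schema.org ClaimReview vocabulary roughly as follows; the organization, URLs, claim, and rating are invented, and real publishers typically include additional properties.

    {
      "@context": "https://schema.org",
      "@type": "ClaimReview",
      "url": "https://factchecks.example.org/2018/unemployment-claim",
      "datePublished": "2018-08-15",
      "author": {
        "@type": "Organization",
        "name": "Example Fact-Checking Service"
      },
      "claimReviewed": "Unemployment fell by 40% last month.",
      "itemReviewed": {
        "@type": "CreativeWork",
        "url": "https://news.example.com/2018/08/story"
      },
      "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False"
      }
    }

A corroboration tool could match the claimReviewed text or the itemReviewed URL against content the user is viewing and surface the rating as an indicator.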

Some of these rely on emerging technologies:

Zoom out to many items across time and you get a sort of automated reputation: the provider’s fact-check history.  This can feed into the provider’s public reputation, which is vital to the business model of some providers.

Reputation

  1. Make the identity of the content provider, including any provenance chain of providers behind them, visible to the consumer, especially in ways which align with the user’s natural skills at recognizing faces and logos. In particular, systems could try to avoid presenting confusingly similar brands without clear warning, and could indicate how familiar the brand appears to be to the user, based on available data.
  2. Software working on your behalf can try to ascertain the reputation of each provider you encounter (e.g., each website you visit) based on what has been said about them by entities you select as trustworthy, and by intermediate parties in a social graph. (One possible data shape for such statements is sketched after this list.)
  3. Users could be prompted to enter, or could at their own initiative enter, their assessment of the reputation of particular sites they visit often, for their contacts unfamiliar with the site to use. This information could also come from institutions willing to make statements about their peers or clients.
  4. Reputation might be complex, involving time and subject matter (see Granularity above), or it might be a simple one-bit flag indicating whether the speaker considers the subject to always operate in good faith and with respect for accuracy.
  5. Make trust seals and other positive reputation signals from institutions secure. Some trust seals (like the one from BBB) are more than just an embedded image that anyone can use, but many are not. Even the more secure ones can be abused to mislead users who do not check them. In contrast, an out-of-band secure solution, for example one handled by the browser, could be much more secure.
  6. Negative reputation, such as demands for retraction, could be shared and displayed when you visit a site, if it comes from sources you trust. With the right software, as a lie travels halfway around the world, it might be firmly tied to its correction, or at least to a strong warning about its low credibility.
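
As one possible data shape for the “statements about providers” mentioned in item 2 above, a trusted entity could publish a review of a provider using existing schema.org terms; the organizations, URL, rating, and wording below are invented for illustration.

    {
      "@context": "https://schema.org",
      "@type": "Review",
      "author": {
        "@type": "Organization",
        "name": "Example Press Standards Council"
      },
      "itemReviewed": {
        "@type": "Organization",
        "name": "Example Daily News",
        "url": "https://news.example.com"
      },
      "reviewRating": {
        "@type": "Rating",
        "ratingValue": 4,
        "bestRating": 5,
        "worstRating": 1
      },
      "reviewBody": "Operates in good faith and runs a functioning corrections process."
    }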

Transparency

Here, the emphasis shifts toward the provider making an effort to be more visibly trustworthy. Transparency requires the provider reveal information about themselves and their content, which can require considerable effort and often brings significant risk, in the hope of being more credible.

  1. Label types of information, particularly information which is not intended to be accurate but could be mistaken for fact (such as parody, fiction, and opinion which is not phrased as an opinion).  This is partially done in schema.org's article types and is illustrated in the sketch after this list. These types could be conveyed to users by the platforms.
  2. Providers can disclose ownership, funding, and control information about themselves, which can be connected with reputation network information.
  3. Software can simply highlight which sites have made disclosures.
  4. Software can aggregate and analyze machine-readable disclosures.  This could benefit considerably from standardized optional disclosure clauses, in the style of Creative Commons.
  5. Software can highlight aspects of the transparency disclosures which might be especially relevant to you, based on your values or stated trust concerns
  6. Software can particularly highlight supportive and refutive claims from known third parties about self-descriptive claims. (Combines with reputation.)
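
The sketch referenced in item 1 above might look roughly like the following, with an article labeling itself as opinion and pointing to its publisher's disclosures using schema.org terms developed in collaboration with the Trust Project (such as OpinionNewsArticle, NewsMediaOrganization, publishingPrinciples, ownershipFundingInfo, and correctionsPolicy). Some of these terms are recent or still pending in schema.org, and all names and URLs here are invented.

    {
      "@context": "https://schema.org",
      "@type": "OpinionNewsArticle",
      "headline": "Why the new unemployment numbers deserve skepticism",
      "datePublished": "2018-08-16",
      "author": { "@type": "Person", "name": "A. Columnist" },
      "publisher": {
        "@type": "NewsMediaOrganization",
        "name": "Example Daily News",
        "publishingPrinciples": "https://news.example.com/standards",
        "ownershipFundingInfo": "https://news.example.com/about/ownership",
        "correctionsPolicy": "https://news.example.com/about/corrections"
      }
    }

A platform could convey the “opinion” label to users, while reputation and corroboration tools fetch and check the linked disclosure pages.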

Many specific items about which transparency would be useful are discussed as part of The Trust Project.

Much of the work here is about how to motivate transparency and avoid it being gamed. Misleading disclosures are a sort of double-or-nothing gamble in trying to mislead consumers.

Potential New Web Standards

Review Board Process

As detailed above, in First, Do No Harm, it is quite possible for credibility standards work to do significant unintended damage. One technique which might help mitigate this risk would be to adhere to an independent (and potentially adversarial) review process.  These review processes are often seen:

These boards can be expensive to operate and their process can be slow and unpleasant. They can, however, provide an antidote to the tendency of people to be remarkably blind to flaws in proposals they support. This antidote works as long as one or more members of the board is careful to remain disinterested. When set up carefully, with board members who have high credibility in various communities, the board itself and the products approved by the board can also have increased credibility.

Alternatively, an adversarial process (as seen in courts) could be set up where some people are tasked with finding flaws and arguing against a proposal. A third option is to simply rely on wide public review (as per normal W3C process) and hope someone will comment strongly on any major issues.

A review board like this, capable of overseeing potential interventions around credibility, might be useful beyond W3C. It could also be useful for vendor-specific interventions, where a vendor seeks advice about the possible impact of a proposed change. In any case, a board like this could bring considerable credibility to the process if staffed with people respected by diverse stakeholders.

Organizations to consider cooperating with in setting up a board like this include the Internet Governance Forum, the Global Network Initiative, and the Association for the Accreditation of Human Research Protection Programs.

It may also be possible to set up experimental situations to test proposals while limiting negative impact. This might be seen as A/B testing on a large scale. Civil Servant has developed some methodologies which may be helpful.

Data Vocabularies (Schemas)

Each of the potential standards below is intended to allow systems to interoperate by sharing data with common syntax and semantics. The expectation is that data sharing will be done in a manner compatible with schema.org, using W3C standard data technologies like JSON-LD and Web Annotations. These build on Web Architecture and the underlying general graph model of RDF, with related technologies like the SPARQL query language, and can usually be used as ordinary JSON embedded in web pages. In each case, the purpose is to allow various independent systems to provide data feeds which can be easily and unambiguously understood by other systems across the web.
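
For example, a publisher might embed a small JSON-LD block directly in an article page, which consumers can read as ordinary JSON without any RDF tooling. This is only a sketch of the mechanism; the headline, names, and types shown are placeholders:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline",
    "datePublished": "2018-10-01",
    "author": { "@type": "Person", "name": "Jane Reporter" },
    "publisher": { "@type": "NewsMediaOrganization", "name": "Example Gazette" }
  }
  </script>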

It remains unclear how much of the formal W3C standards process is necessary or even helpful for establishing practical, working interoperability for this kind of data. It may be that flexible, responsive de facto standard schemas can emerge from an organic incubation-style community process. In any case, people and organizations are encouraged to experiment with using existing standards and new vocabularies as necessary to achieve these goals and push for widespread adoption. Please do share ideas, progress, and results with the Community Group.

Inspection

A standard for expressing features of web pages and their content which signal credibility and which are relatively expensive to fake. When humans recognize the features, this would allow their work to be shared. When software recognizes them, it would allow a more standard API between modules. Standardization would also allow people to be trained to work with these features, both in creating content and in mentally assessing credibility.
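
A minimal sketch of how a recognized signal might be shared, using the W3C Web Annotation data model. The signal name and target URL are purely hypothetical; a real standard would need to define the signal vocabulary itself:

  {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "assessing",
    "body": {
      "type": "TextualBody",
      "value": "Signal observed: article cites and links to primary sources (hypothetical signal name: cites-primary-sources)"
    },
    "target": "https://gazette.example.com/2018/10/example-story"
  }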

Corroboration

A standard for:

  • Fact-checkers to publish their work in a way which can be easily aggregated (largely already done with ClaimReview; see https://github.com/w3c/cred-claims and the sketch after this list).
  • Representing the argumentation structure of a fact-check, making it easier to re-check.
  • People and machines to indicate which claims they think should be fact-checked.
  • Expressing sightings of a claim and connecting them with existing fact checks.
  • Expressing retractions (self-claim-check).
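
For the first item, a fact-check published with the existing schema.org ClaimReview vocabulary might look roughly like the following; the organization, URLs, claim text, and rating scale are placeholders. The remaining items in the list would need new or extended vocabulary:

  {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factcheck.example.org/reviews/1234",
    "datePublished": "2018-09-15",
    "author": { "@type": "Organization", "name": "Example Fact Check" },
    "claimReviewed": "The moon is made of cheese.",
    "reviewRating": {
      "@type": "Rating",
      "ratingValue": 1,
      "bestRating": 5,
      "worstRating": 1,
      "alternateName": "False"
    }
  }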

Reputation

Standards for reputation of providers:

  • Expressing a list of approved members of an organization (e.g., IFCN, W3C, or AP); a sketch appears after this list.
  • Expressing a list of employees, students, or other vetted individuals.
  • Expressing that an organization, individual, or some content has been vetted and approved for some particular quality/credential/certificate.
  • Expressing the identity of the people behind a site.
  • Expressing the partisan leanings of a provider, or other bias.
  • Expressing the belief that a provider acts in good faith in the service of accurate content, or fails to do so.
  • … other reputational information.
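
For instance, a membership list (the first item above) might be published using existing schema.org terms such as member, as in this placeholder sketch. Whether existing terms are sufficient, or a vetting-specific vocabulary is needed, remains an open question:

  {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Fact-Checking Network",
    "url": "https://network.example.org/",
    "member": [
      { "@type": "Organization", "name": "Example Fact Check", "url": "https://factcheck.example.org/" },
      { "@type": "Organization", "name": "Another Verifier", "url": "https://verifier.example.net/" }
    ]
  }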

Standards for reputation of content:

  • Expressing engagement metrics, allowing third parties to see indicators of virality (a sketch follows).
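
Existing schema.org terms like interactionStatistic could be a starting point, as in this placeholder sketch; how far self-reported counts can be trusted is itself a credibility question:

  {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "url": "https://gazette.example.com/2018/10/example-story",
    "interactionStatistic": {
      "@type": "InteractionCounter",
      "interactionType": { "@type": "ShareAction" },
      "userInteractionCount": 12345
    }
  }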

Transparency

Some work has already been done in this area by the Trust Project, working with schema.org; see their github and Markup for News.

More work is beginning within the Journalism Trust Initiative (JTI), with leadership on framing ethical issues from Ethical Journalism Network, especially with their Ethical Media Audit.

Directions for next steps on transparency, and impact of existing work, are not yet clear.

Other

Define a way for one provider to embed content from another without endorsement. This would do for credibility roughly what nofollow did for SEO. An iframe would be the natural web way to do this, but third-party content on social media doesn’t currently work that way. There is also the question of whether the visited site can be entirely blameless for harmful third-party content; perhaps it can embed such content with indicators about what editorial/review process it followed, if any.

Define a way for one provider to link to or embed content from another as an anti-endorsement. This could be interpreted like a “downvote” by a PageRank-style algorithm (but would not need to be coupled with a fact-check). It would let highly ranked sites not only increase but also decrease the credibility ranking of other sites. Whether this is actually applied by a search engine or other system will depend on how effective it proves in practice, just as links are used now.
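
No agreed vocabulary exists yet for a machine-actionable non-endorsement or anti-endorsement, so the following is only a sketch of the general shape such a statement could take if published as data, using the W3C Web Annotation model with an existing motivation value; the wording and target URL are placeholders:

  {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "assessing",
    "body": {
      "type": "TextualBody",
      "value": "Linked for criticism; not an endorsement."
    },
    "target": "https://example.net/disputed-story"
  }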

Browser Features

While many approaches involve behavior from browsers (perhaps via extensions), few require any agreement among vendors (i.e., standardization).

One exception is that browsers could manage a collection of independent credibility assessment tools (something like extensions) which could work together to guide and inform the user. This kind of feature would typically be pioneered in browser extensions and then later adopted into browsers to increase the user base.

Reading List

[Please add more, as long as you’ve read it and honestly recommend it as relevant. Some description is also helpful.]

Other reading lists and literature reviews:

Getting Started

Information Disorder: Toward an interdisciplinary framework for research and policy making. Claire Wardle and Hossein Derakhshan, of First Draft, Oct 2017. Includes specific recommendations for technology companies.

A Structured Response to Misinformation: Defining and Annotating Credibility Indicators in News Articles (The Web Conference, April 2018). A paper from CredCo members experimentally testing 16 signals, 8 of them inspection-based. A video of the paper presentation, by An Xiao Mina, is also available.

Credibility Coalition, Our toolkit for people and teams tackling misinformation online (including “MisinfoMap”).

American Views: Trust, Media, and Democracy (Knight Foundation, Jan 2018), https://knightfoundation.org/reports/american-views-trust-media-and-democracy.

Mistrust, Efficacy and the New Civics (Ethan Zuckerman, MIT Center for Civic Media, Whitepaper for the Knight Foundation, Aug 2017), http://www.ethanzuckerman.com/blog/2017/08/17/mistrust-efficacy-and-the-new-civics-a-whitepaper-for-the-knight-foundation/.

The Partisan Brain: Why People Are Attracted To Fake News And What To Do About It

Truth Decay (RAND Corporation report, Jan 2018). Also, a 3-minute video and a 12-minute video.

Wikipedia articles on machine-readable data and machine-readable documents. In the latter, see especially the four essential characteristics of trustworthy business records set forth in ISO 15489, Information and documentation - Records management.

Books

Propaganda, classic and prescient 1928 book by Edward Bernays, the father of Public Relations.

Web Literacy for Student Fact-Checkers (2017) by Michael A. Caulfield.

Design Solutions for Fake News is a sprawling (200+ page) crowd-sourced Google-Doc, initiated by Eli Pariser after the 2016 US Election.  It approaches many of the same ideas as this document, from some different angles.

The Righteous Mind (2012), by Jonathan Haidt.

Factfulness: Ten Reasons We’re Wrong About the World -- and Why Things Are Better Than You Think (2018), by Hans Rosling.

Ongoing: Journals, Newsletters, Blogs, Podcasts, …

Acknowledgements

Group Participants

We are grateful to everyone who participated: asking questions, making suggestions, taking notes, and especially sharing their knowledge and ideas. The following people attended at least one meeting of the group, according to group attendance records, either as a member or an invited guest (the editor apologizes in advance for any omissions; corrections welcome): Amy Guy, Amy Zhang, An Xiao Mina, Annette Greiner, Aviv Ovadya, Ben Werdmuller, Caroline Burle, Cheryl Langdon-Orr, Chris Needham, Christopher Guess, Cong Yu, Connie Moon Sehat, Dan Brickley, Daniel Schwabe, David Karger, Davide Ceolin, Ed Bice, Ed Summers, Farnaz Jahanbakhsh, Giovanni Luca Ciampaglia, Greg McVerry, Jesse Kranzler, John Connuck, Jon Udell, Katie Haritos-Shea, Matt Lee, Meredith Carden, Mevan Babakar, Michael Golebiewski, Michel Weksler, Newton Calegari, Nick Pickles, Owen Ambur, Patrick Hayes, Reto Gmür, Sally Lehrman, Sam Boyer, Sandro Hawke, Sara-Jayne Terp, Scott Lowenstein, Scott Yates, Stuart Myles, Symeon Papadopoulos, Tantek Çelik, Tim Weninger, Timothy Cowlishaw, Ting Cai, Tom Gilbert, Tzviya Siegman, Vagner Diniz, Vinny Green, Zoë Triska.

Review Comments

We are grateful for review comments from these individuals, some of whom are in the group. These people do not necessarily endorse this document. Please add your name/affiliation here, in alphabetical order, if submitting review comments: Amy Guy, Amy Zhang, An Xiao Mina, Annette Greiner, Aviv Ovadya, Connie Moon Sehat, Davide Ceolin, Erica Anderson, Georg Rehm, George Linzer, Humphrey Obuobi, Jon Udell, Kelly J. Cooper, Owen Ambur, Scott Yates, Symeon Papadopoulos.

Change Log

2018-10-11

2018-10-09

2018-10-08

2018-10-07

2018-10-06

2018-10-04

2018-10-01