This document specifies various types of information, called credibility signals, which are considered potentially useful in assessing credibility of online information.

This document is automatically assembled from a crowd-sourced Google Doc and various data sources. It may contain completely bogus content. You may prefer the most recent stable release.


Comments are welcome and are especially useful if they offer specific improvements which can be incorporated into future versions. Please comment either by raising a GitHub issue or by making inline comments on the Google Doc (easily reached using the pencil πŸ–‰ link in the right margin). If neither of those options works for you, please email your comments to public-credibility-comments@w3.org (archive, subscribe).

Introduction


Purpose


This document is intended to support an ecosystem of interoperable credibility tools.  These software tools, which may be components of familiar existing systems, will gather, process, and use relevant data to help people more accurately decide what information they can trust online and protect themselves from being misled. We expect that an open data-sharing architecture will facilitate efficient research and development, as well as an overall system which is more visibly trustworthy.

The document has three primary audiences:

  1. Software developers and computer science researchers wanting to build systems which work with credibility data.  For them, the document aims to be a precise technical specification, stating what they need for their software to interoperate with any other software which conforms to this specification.
  2. People who work in journalism and want to review and contribute to this technology sphere, to help make sure it is beneficial and practical.
  3. Non-computer-science researchers, interested in helping develop and improve the science behind this work.

In general, we intend for this document to be:

Credibility Data


The document builds on concepts and terminology explained in Technological Approaches to Improving Credibility Assessment on the Web.  Our basic model is that an entity (human and/or machine) is attempting to make a credibility assessment — to predict whether something will mislead them or others — by carefully examining many different observable features of that thing and things connected with it, as well as information provided by various related or trusted sources.

To simplify and unify this complex situation, with its many different roles, we model the situation as a set of observers, each using imperfect instruments to learn about the situation and then recording their observations using simple declarative statements agreed upon in advance. Because those statements are inputs to a credibility assessment process, we call them credibility signals.  (The term credibility indicators is sometimes also used.)

This document, then, is a guide to these signals.  It states what each observer might say and exactly how to say it, along with other relevant information to help people choose among the possible signals and understand what it means when they are used.

Because this is a new and constantly-changing field, we do not simply state which signals should be used.  Instead, we list possible signals that one might reasonably consider using, along with information we expect to be helpful in making the decision.

Example


[explain]

Assessing credibility of https://news.example/article-1

   Looking at title

      I consider it to be clickbait

      It's clickbait because it's a cliffhanger

   Looking at article

      It cites scientific research

   Looking at provider

      Established in 1974

      Owned domain since 2006

Factors in Selecting Signals


When building systems which use credibility signals and trying to decide which signals to use, there are different factors to weigh.  This section is aspirational; we hope this document will in time provide guidance on all these factors.  

Measurement Challenges


There are factors about how difficult it is to get an accurate value for the signal:

  1. Do people independently observing it get approximately the same value?  
  2. Do observations vary with the culture, location, language, age, beliefs, etc, of the people doing the observation?
  3. Would the same people make the same observation in future months or years?
  4. How much time and effort does it take people to make the observation?
  5. Do people need to be trained to make this specific observation?
  6. What kind of general training do people need (eg a journalism degree) to do it?
  7. How do machines compare to humans in making this observation, in terms of cost, quality, types of errors, and susceptibility to being tricked?

Many of these factors can be measured using inter-rater reliability (IRR) techniques.  When studies have made such measurements, our intent is to include that data in this document.
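When such reliability numbers are reported, Cohen's kappa for two observers is a common starting point. The sketch below is illustrative only; the function name and the example labels are our own, not part of this specification.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items.

    1.0 means perfect agreement; 0.0 means no better than chance.
    """
    n = len(rater_a)
    # Observed agreement: fraction of items where the two raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement: chance both pick the same label, given each
    # rater's own label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[lbl] * freq_b[lbl] for lbl in freq_a) / (n * n)
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1 - p_e)

# Two hypothetical observers rating five headlines for clickbait:
a = ["clickbait", "clickbait", "not", "not", "clickbait"]
b = ["clickbait", "not", "not", "not", "clickbait"]
print(round(cohens_kappa(a, b), 2))
```

Values well below 1.0, as here, suggest the signal definition may need tightening before observations from different people can be pooled.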

Here is a table of the data we have.  Excerpts are listed with the relevant signals.

Special: studies-table

Value in Credibility Assessment


Another important set of factors relates to how useful the measurement is in assessing credibility, assuming the observation itself is accurate.

  1. Does the signal have a strong correlation to content accuracy, itself determined by consensus among experts?
  2. Is it particularly indicative of credibility when used in combination with other signals?  (For example, as part of computing the value of a latent variable.)
  3. Is it conceptually easy for people to understand?
  4. Do professionals in the field think it's likely to be a useful signal?
  5. How dependent are these characteristics on the culture or time period being considered?
  6. How dependent are these characteristics on the subject matter of the information being assessed for credibility?

Feedback Risks (“Gameability”)


One should also consider how the overall ecosystem of content producers and consumers might be changed by credibility tools adopting the signal. Once attackers see it’s being used, a signal that works well today might stop working, or even be used to make things worse. See Feedback Risks.

  1. Is it disproportionately useful for attackers (eg a viral call to action)?  If so, making this a negative credibility signal should generally be beneficial.
  2. Is it disproportionately expensive for attackers (eg journalistic language)?  If so, making this a positive credibility signal should generally be beneficial.
  3. Who might get impacted by “friendly fire”?  Even if adopting a signal might — on average — harm attackers more than everyone else, certain individuals or communities who have done nothing wrong might be penalized.  Tradeoffs must be carefully made, ideally in a consensus process with the impacted people.

Interoperability


The value of sharing signal data depends on how that signal is used by other systems.

  1. Are others producing data using this signal?
  2. Are there useful data sets available?
  3. Are others consuming data, paying attention to reported observations of this signal?
  4. Are there tools which work with it, eg running statistics?
  5. Is the definition clear and unambiguous, so people using it mean the same thing?
  6. Are there clear examples?
  7. Is there an open history of commentary, with questions and answers, and issues being addressed by various implementers?
  8. Is documentation available in multiple languages?
  9. If the definition is under development, how can one participate?
  10. If the definition could possibly change, who might change it, and under what circumstances?
  11. Are there any intellectual property considerations? See W3C Patent Policy.
  12. Is there a test suite / validation system for helping confirm that an implementation is working properly?
  13. Are there implementation reports, confirming that tools are functioning properly, according to the testing system? (For an example, see ActivityPub).

Publishing Credibility Data


TBD; the intent is basically to follow the schema.org technique, using JSON-LD.
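As a rough illustration of that schema.org-style approach, a publisher might embed one signal observation as JSON-LD in a page. Everything below is an assumption for illustration: the @context URL, the property names, and the observer identifier are made up; this specification has not fixed any of them.

```python
import json

# Hypothetical vocabulary and identifiers -- illustration only.
observation = {
    "@context": "https://example.org/credibility-vocab",
    "@id": "https://news.example/article-1",
    "textHasFormalTone": True,                     # hypothetical signal property
    "author": "https://tools.example/rater-7",     # hypothetical observer id
    "dateCreated": "2019-03-01T12:00:00Z",
}

# schema.org-style publishing: embed the JSON-LD in the page as a script tag.
script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(observation, indent=2)
    + "\n</script>"
)
print(script_tag)
```

The boolean property corresponds to a template statement, in the same way the Turtle example later in this document maps a statement to a boolean-valued property.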

Consuming Credibility Data


TBD; point to some tools and the relevant specs. Basically JSON-LD.
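A consumer might pull JSON-LD blocks out of a page and index the signal data by subject. This is a sketch under our own assumptions: a real consumer would use a JSON-LD processor to expand property names against the @context, while here we read the compact form directly, and the page content and property names are illustrative.

```python
import json
import re

# Hypothetical page markup containing one JSON-LD block.
html = """
<p>Some article markup...</p>
<script type="application/ld+json">
{"@context": "https://example.org/credibility-vocab",
 "@id": "https://news.example/article-1",
 "textHasFormalTone": true}
</script>
"""

signals = {}
for block in re.findall(
        r'<script type="application/ld\+json">(.*?)</script>', html, re.S):
    data = json.loads(block)
    subject = data.get("@id")
    # Keep only the signal properties, not the JSON-LD keywords.
    signals.setdefault(subject, {}).update(
        {k: v for k, v in data.items() if not k.startswith("@")})

print(signals["https://news.example/article-1"])
```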

Organization of this document


Section 1 (“Introduction”) provides instructions for how to use and help maintain this document, along with general background information.

The rest of this document, after the introduction, is a list of signals and information about them, as discussed in the introduction.  The signals are organized into related groups, in hierarchical sections.  At the lower levels of the hierarchy are the signals themselves, while the higher levels provide grouping of the signals, to help people understand them.

One important level of the hierarchy identifies the subject type of the signal.  This is the conceptual entity being examined, considered, or inspected, when one makes the observation being recorded in the signal data.  This could be imagined in different ways: when you are observing a claim made in the 3rd paragraph of an article published in some newspaper, are you observing the claim, the paragraph, the article, the newspaper, or even the author of the article?  In general, we aim for the smallest granularity that makes sense, which in this case would probably be the claim.

At times, it may not be obvious to which subject type a signal belongs, or it could sensibly belong with several different ones.  In this case, it might be moved to a different section in the document as people come to understand it better.  When it’s not clear, there should be links from the places a signal could reasonably be to the place it actually is.

This may require discussion, and might remain open for debate.  When a signal or group of signals makes sense in two places, consider linking it from the places it isn’t, to help people find it.

In many cases, a signal could be seen as a set of similar signals which are not strictly identical. This can be handled by adding additional signal headings with the finer distinction, when necessary. In this case, template statements might appear under more than one signal.

Note that sections may be moved and renumbered.  Do not rely on section numbers remaining the same.  For linking to a part of the document, consider using the gdocs h.xxxxxx fragment ids, provided by the Table of Contents; those should remain stable.  Also, whenever changing a heading, especially a signal heading, if someone might be referring to it by name, please move the old text into a paragraph starting “Also called:”.  

Template Statements


The most important thing about a signal definition is to be clear what observation the signal data is recording. If the signal heading is “Article length”, does that mean length in words or bytes or characters or some other metric? Does it include the title? For each signal, we want an easy way to communicate its definition that is short but clear, while being as detailed as necessary.

The technique we use here is to express the semantics of the signal using plain and simple sentences in natural language which convey the same knowledge as the signal data. If you imagine people using credibility software exchanging these statements (perhaps in text messages or on Twitter), you should get the right semantics. You can assume metadata, like who sent it and when it was sent, is available, so the statements can include terms like “I” and “now”.

For machine-to-machine data interoperability, these template sentences and the signal heading are turned into a data schema, after which the JSON-LD / schema.org / semantic web / linked data technology stack can be used.

The statements we use are templates because they abstract over a variety of similar sentences which differ in specific limited ways.  For example, these statements:

  1. I have examined the article at https://example.com/alice and find it highly credible
  2. I have examined the article at https://example.com/brian and find it highly credible
  3. I have examined the article at https://example.com/casey and find it highly credible

are all the same, except in the URL.   We convey this using a template statement, which has a variable portion in square brackets, like:

I have examined the article at [subject] and find it highly credible

Tech note

If we (automatically or manually) map this template to a property with the pname :iHaveExaminedHighlyCredible, then sentence 2 above would be encoded in Turtle as

  • { <https://example.com/brian> :iHaveExaminedHighlyCredible true }.

Alternatively, we could make it a class, but boolean-valued properties may be better, so that all signals remain as properties.

The bracketed template expression “[subject]” is required in every template, to indicate what entity is being observed.  Additional bracket expressions can be used when there are other elements of the statement to make variable.  In particular, [string] (for text in quotes) and [number].

(For now, try to use just those three.  Software and documentation are being developed to allow more features. If you find this too restrictive, go ahead and write something else inside the square brackets and we'll deal with it later, but include a question mark so it's clear you knew you were making it up.)

An example needing multiple variables:

  1. https://example.com/alice took 4.75 seconds to load, just now.
  2. https://example.com/brian took 5.9 seconds to load, just now.

could be matched by:

[subject] took [number] seconds to load, just now.
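One way software might turn such a template into a machine-readable matcher is with a regular expression built from the bracket expressions. This is purely our own illustration, not the tooling mentioned above, which may work quite differently.

```python
import re

# Regex fragments for the three allowed bracket expressions.
PATTERNS = {
    "subject": r"(?P<subject>\S+)",           # a URL or other identifier
    "string": r"\"(?P<string>[^\"]*)\"",      # quoted text
    "number": r"(?P<number>\d+(?:\.\d+)?)",   # integer or decimal
}

def template_to_regex(template):
    """Compile a template statement into a regex (sketch only)."""
    parts = re.split(r"\[(subject|string|number)\]", template)
    # Even indexes are literal text; odd indexes are variable names.
    body = "".join(
        re.escape(p) if i % 2 == 0 else PATTERNS[p]
        for i, p in enumerate(parts))
    return re.compile("^" + body + "$")

rx = template_to_regex("[subject] took [number] seconds to load, just now.")
m = rx.match("https://example.com/brian took 5.9 seconds to load, just now.")
print(m.group("subject"), m.group("number"))
```

Matching a concrete statement against the compiled pattern recovers the variable parts, which is exactly the information that would go into the signal data.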

Instructions for editing this document


As an experiment, this document is currently set so everyone can edit it, like Wikipedia. It is the Google docs version that is editable. We suggest you change the “Editing Mode” to “Suggesting” (using the pencil icon in the upper-right) until you are quite familiar with this document. You may also comment using the usual Google Docs commenting features.

If you make or suggest any edits to this document, you are agreeing to the W3C Community Contributor License Agreement which has significant copyright and patent implications.

The subsections below give some advice for how to make edits which are helpful.

Expand discussion


Each section should begin with a short introduction written with a neutral point of view, reflecting consensus about why the signal might be useful and what the risks might be. To enable consensus among a broad community, the intent is for this text to be developed iteratively, with each contributor adding their perspective while respecting what is already present.

Questions and minor concerns should generally be added as annotations using the “Add a Comment” function, without editing the document. If they become issues requiring back-and-forth discussion, they should be turned into github issues and linked from the most relevant place in this document with a paragraph starting “Issue:”

These discussion sections are intended to be nonnormative. That is, they do not say how software using the signal is required to behave for interoperability. The normative content of this specification is the template statements and the mapping of the statements to RDF.

Add new template statements


If you are confident you understand what a signal is intended to measure, and think you can provide a template statement which expresses it more clearly and simply, with little ambiguity, please add a new row to the bottom of the “Proposed template statements” table and add your entry.  Please also put the next higher number in the Key field for reference, and your name in the By field. This “by” field is optional; it is intended to help simplify discussion, telling people who to talk to, and to give some credit. Listing the name of a large group in this field is not particularly useful.  

After adding an entry, for a short time (perhaps a few hours, guided by any comments on it) it’s okay to edit it if you change your mind. After that, please leave it, and just add a new row for the new version. You can put new versions in the middle of the table and use keys like 1a.

Add new signals


Once you’re familiar with the structure of this document and all the signals in your area of interest, you may add new signal sections (with a title starting “Signal:”) or even new group sections.  (For heading numbering, you can use the “Table of contents” add-on from LumApps to number the headers. Or just leave the numbering for someone else using the add-on.)

When you add a new signal, please copy this table to the new section, and then fill in at least one row to clarify what the signal data conveys.

Key | Proposed Template Statement | By

Contributors


Folks who add content to this document are encouraged to add themselves in this section, potentially with some affiliation & credential information.  This also allows the “By” column to stay short, as people can use short forms of names (eg only first or last name, if unique in this doc).

Subject type: Claim


This section is for signals about claims.

A claim is “an assertion that is open to disagreement; equivalently, a meaningful declarative sentence which is logically either true or false (to some degree); equivalently, a proposition in propositional logic.” [credweb report]

Claims can be stated (with various degrees of clarity) in some content or implied by the content (even non-textual content, like a photograph).

Claims are usually the smallest practical granularity. Credibility data about claims is largely focused on what other sources have said about that claim, as in fact checking, but could also involve relationships between claims and textual analysis of claim text.

Claim Review


The “ClaimReview” model developed at schema.org grows out of the tradition of independent, external fact-checking, as in PolitiFact.  With this model, a fact-checker reviews a claim, typically made by a public figure, and then publishes a review of that claim, a “claim review”. Within schema.org, this parallels other reviews, like restaurant reviews.

[ Can we fit claimreview neatly into this observer/signal model?  It’s a bit of a stretch.  TBD. ]

Subject type: Text


Includes: phrase, sentence, paragraph, document, document fragment

A text, in this sense, is a sequence of words, with the usual punctuation, and sometimes embedded multimedia content or meaningful layout, like tables.  That is, it’s a document or portion of a document. As examples, a phrase, sentence, paragraph, document section, book chapter, book, and complete book series would typically each count as a text.  

Signals here concern properties of the text itself, separate from how it might be published (eg on a Web Page, on a billboard, spoken at a rally) or where it might be published (in some Venue).  The text should be considered immutable: a text (in this sense) doesn't change.  If you take a text and change it, you are making a new text, which needs to be reexamined, to see which observations (and thus which signal data) apply to this other, new text.

Issue: (tech) How to represent texts in RDF?  Options include annotation URL with secure hash, annotation object URL with secure hash, data: URI, etc.
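The secure-hash options in that issue could look something like the sketch below. It is illustrative only: the urn:sha256: form is our own assumption, not something this document has settled on, and a real scheme would also have to fix Unicode normalization and whitespace handling.

```python
import hashlib

def text_id(text: str) -> str:
    """Identify an immutable text by a secure hash of its UTF-8 bytes.

    The urn:sha256: prefix is a made-up convention for illustration.
    """
    return "urn:sha256:" + hashlib.sha256(text.encode("utf-8")).hexdigest()

a = text_id("To be, or not to be")
b = text_id("To be, or not to be!")  # an edited text is a new text
print(a != b)
```

Because any change to the text yields a different identifier, signal data attached to the hash automatically stops applying when the text is edited, which matches the immutability rule above.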

Formality


Texts adopt a tone to appeal to their audience and/or attempt to convey how the text should be used. For instance, an academic study is written in formal, verbose and grammatically correct language, while a listicle is short, informal and often humorous. The academic study uses these characteristics to convey authority, while the listicle is intentionally unauthoritative.  

Signal: Formal tone

Key | Proposed Template Statement | By
1 | Text of [subject article] has a formal tone. | Samantha Sunne

No available data sources found


Signal: Correct Spelling

Key | Proposed Template Statement | By
1 | Text of [subject article] has a formal tone, as measured by correct spelling. | Samantha Sunne

No available data sources found


Signal: Correct Grammar

Key | Proposed Template Statement | By
1 | Text of [subject article] has a formal tone, as measured by correct grammar. | Samantha Sunne

No available data sources found


Signal: Informal tone

Incorrect or colloquial grammar, slang, and humor are some indications of informal tone.

Key | Proposed Template Statement | By
1 | Text of [subject article] has an informal tone. | Samantha Sunne

No available data sources found

Signal: Slang

Key | Proposed Template Statement | By
1 | Text of [subject article] has an informal tone, as measured by slang. | Samantha Sunne

Example sentence: "In this moment we all learned that Johnny Depp isn't a teen and has no clue what "Bae" means." (Source)

No available data sources found


Signal: Informal grammar

Key | Proposed Template Statement | By
1 | Text of [subject article] has an informal tone, as measured by incorrect, casual or colloquial grammar. | Samantha Sunne

No available data sources found


Example sentence: "If you're a Friends fan, you probably know that Ross and Rachel's relationship was...kind of a disaster 95% of the time." (Source)

References or citations


Signal: Uses standardized references or citations

These standards are required and enforced by professions that demand accuracy, and are typically found in highly researched, and therefore more authoritative, texts. Examples: Legal, academic, or scientific citations, e.g., MLA, APA.

Key | Proposed Template Statement | By
1 | Text of [subject article] uses standardized references or citations. | Samantha Sunne

Example sentence: "Changes in body temperature have long been used as an indicator of injury, inflammation or infection in veterinary medicine (George et al., 2014), however, the use of temperature devices such as rectal thermometers and thermal microchips can be both invasive and time consuming (Johnson et al., 2011)." (Source)

No available data sources found

Signal: Uses formal but not standardized references or citations

Examples: Journalism, nonfiction or explanatory material

Some texts use references extensively, even if they are not written according to a rigid structure. These texts tend to be authoritative but not as authoritative as the texts using the rigidly structured citations. The content of the references is also extremely influential.

Key | Proposed Template Statement | By
1 | Text of [subject article] uses references or citations that are not recorded according to professional standards. | Samantha Sunne

Example sentence: "Families that receive benefits are now over $2,600 worse off every year, according to an analysis by the Child Poverty Action Group, an advocacy group." (Source)

No available data sources found

Signal: Few to zero references or citations

A text with no references to other materials is original content, which often means it is opinion, personal experience, or even fiction. These tend to be less authoritative than texts with references.

Key | Proposed Template Statement | By
1 | Text of [subject article] has few or no references or citations. | Samantha Sunne

One exception is a first-hand account, which can become a primary document for later research. These personal accounts, however, should be vetted and cross-referenced with other sources to evaluate their accuracy.

Example sentence: "The shrine is the work of SUNY Purchase sophomore Phillip Hosang, who, like a lot of students at the school, had long heard rumors about a secret room in a men's bathroom somewhere in the visual arts building." (Source)

No available data sources found

Pronouns


Signal: Many or multiple instances of the pronouns "I" or "you"

Texts that use the pronouns "I" or "you" are typically opinion, correspondence, or personal accounts. These texts are usually not trying to be authoritative or explanatory; however, they sometimes form a primary document that is used in secondary research.

Key | Proposed Template Statement | By
1 | Text of [subject article] has many instances of the words "I" or "you." | Samantha Sunne

Example sentence: "After paying close attention to many of your campaigns, I believe you are united by a desire to get things done to help a lot of people who’ve been left behind." (Source)

No available data sources found

Signal: Few or no instances of "I" or "you"

Texts that do not use first or second person are less likely to be opinion content. However, this is no indication of credibility.

Key | Proposed Template Statement | By
1 | Text of [subject article] has few or no instances of the words "I" or "you." | Samantha Sunne

No available data sources found


Example sentence: "President Trump said he would not overrule his acting attorney general, Matthew G. Whitaker, if he decides to curtail the special counsel probe being led by Robert S. Mueller III into Russian interference in the 2016 election campaign." (Source)

Signal: Vocabulary or reading level

A wide and varied vocabulary, which may include jargon or uncommon words, is an indicator of formal tone.

No definitions provided yet.

Incivility and impoliteness


Signal: Incivility

Key | Proposed Template Statement | By
1 | Text of [subject article] contains a verbalized threat to democracy, such as a proposal to overthrow democratic government by force or other undemocratic means (e.g. “Obama is a Muslim Agent with Brotherhood Ties. American people must take him down.”) | Tamar Wilner
2 | Text of [subject article] contains stereotypes, such as calling a person a “faggot,” “terrorist,” or “backward” (e.g. “Muslims are terrorist sympathizers”) | Tamar Wilner
3 | Text of [subject article] contains threats to people’s individual rights, such as freedom of speech or personal freedom (e.g. “You foolish Republicans better shut up”) | Tamar Wilner

Source: Oz, M., Zheng, P., Chen, G. M., & Park, R. H. (2018). Twitter versus Facebook: Comparing incivility, impoliteness, and deliberative attributes. New Media & Society, 20(9), 3400–3419. http://doi.org/10.1177/1461444817749516 

No available data sources found

Signal: Impoliteness

Key | Proposed Template Statement | By
1 | Text of [subject article] contains words in all capital letters (e.g. “Who flew the planes into the towers on 9/11? ILLEGAL IMMIGRANTS!”) | Tamar Wilner
2 | Text of [subject article] contains profanity (e.g. “hell” and “damn”) | Tamar Wilner
3 | Text of [subject article] contains insults or name-calling (e.g. “stupid” or “moron”) | Tamar Wilner

Source: Oz, M., Zheng, P., Chen, G. M., & Park, R. H. (2018). Twitter versus Facebook: Comparing incivility, impoliteness, and deliberative attributes. New Media & Society, 20(9), 3400–3419. http://doi.org/10.1177/1461444817749516 

No available data sources found

Subject type: Image


Includes: Picture, Photograph, Drawing, Illustration

Implied association or tone


When pictures of people are used, there are often choices about which image to use, and how to manipulate it, to make the person look better/worse or associate them with some positive or negative concept.  Some people have pointed out how media gets to choose, when someone is arrested, whether to use flattering photos provided by supporters or a mug shot provided by the police.

Signal: Flattering image


No definitions provided yet.

Signal: Unflattering image


No definitions provided yet.

Originality of Photo Used in an Article


These signals are designed with the assumption that the image is used in the broader context of a journalistic article.

Originality Types


Signal: Most Likely Original

Key | Proposed Template Statement | By
1 | Image is most likely original. | Megan Duncan

No available data sources found

Signal: Appears to be a Copy, with Some Modifications

Key | Proposed Template Statement | By
1 | Image appears to be a copy of one or more articles, with some portions different or remixed | Megan Duncan

Draft typology of modifications

  • Cropping
  • Changing the lighting
  • Adding contrast
  • Changing the colors
  • Adjusting color saturation
  • Merging images
  • Removing or obscuring an object or part of the image

No available data sources found

Signal: Quotes Extensively From Another Source

Key | Proposed Template Statement | By
1 | Text of [subject article] quotes extensively from another source, with some original content | Megan Duncan

No available data sources found


Attribution of Non-Original Image

  • Contains a hashtag

Subject type: Audio


Also called: Audio Clip, Sound Clip, Audio Recording

Audio type

Editor note: This should probably be generalized to all the different content types.

Signal: Audio type is news

Key | Proposed Template Statement | By
1 | [Audio] appears to be news. | Tamar Wilner

No available data sources found

Signal: Audio type is opinion

Key | Proposed Template Statement | By
1 | [Audio] appears to be an opinion piece. | Tamar Wilner

No available data sources found

Signal: Audio type is advertising or marketing.

Key | Proposed Template Statement | By
1 | [Audio] appears to be advertising or marketing. | Tamar Wilner

No available data sources found

Signal: Audio roles - host

Key | Proposed Template Statement | By
1 | [Audio] has an in-studio host. | Tamar Wilner

No available data sources found


Signal: Audio roles - reporter

Key | Proposed Template Statement | By
1 | [Audio] has a reporter. | Tamar Wilner

No available data sources found

Signal: Audio roles - members of the public

Key | Proposed Template Statement | By
1 | [Audio] has interviews with members of the public. | Tamar Wilner

No available data sources found

Signal: Audio roles - experts and/or officials

Key | Proposed Template Statement | By
1 | [Audio] has interviews with expert and/or official sources. | Yemile Bucay and Tamar Wilner

No available data sources found

Signal: Studio conversation

Key | Proposed Template Statement | By
1 | [Audio] has conversation between host and interviewee who is not a reporter. | Yemile Bucay and Tamar Wilner

No available data sources found

Signal: Call-ins

Key | Proposed Template Statement | By
1 | [Audio] has call-ins from members of the public. | Yemile Bucay and Tamar Wilner

No available data sources found

Signal: Studio

Key | Proposed Template Statement | By
1 | [Audio] sounds like it was at least partially recorded in a studio. | Yemile Bucay and Tamar Wilner

No available data sources found


Signal: Outside

Key | Proposed Template Statement | By
1 | [Audio] sounds like it was at least partially recorded outdoors. | Yemile Bucay and Tamar Wilner

No available data sources found


Signal: Station/company identification

Key | Proposed Template Statement | By
1 | Station or company that produced the [audio] is identified. | Tamar Wilner

No available data sources found

Signal: Host/reporter identification

Key | Proposed Template Statement | By
1 | Host of [audio] identifies themselves. | Tamar Wilner
2 | Reporter of [audio] identifies themselves. | Tamar Wilner

No available data sources found

Signal: Quoted individuals are identified.

Key | Proposed Template Statement | By
1 | Individuals quoted in [audio] are identified by name. | Tamar Wilner
2 | Individuals quoted in [audio] are identified by affiliation, if being quoted in a professional capacity. | Tamar Wilner

No available data sources found

Signal: Attribution

Key | Proposed Template Statement | By
1 | [Audio] does not include attribution for the claims made. | Tamar Wilner

No available data sources found

Rhetoric


Signal: Proportional rhetoric

Editor: These should go in some category that includes both text and audio and video: linguistic content.

Key | Proposed Template Statement | By
1 | The rhetoric used in [audio] is proportional to the event or situation described. | Tamar Wilner, adapting Credibility Coalition

No available data sources found

Signal: Extreme Exaggerating Rhetoric

Key | Proposed Template Statement | By
1 | The rhetoric used in [audio] is an extreme exaggeration of the event or situation described. | Tamar Wilner, adapting Credibility Coalition

Signal: Extreme Minimizing Rhetoric

Key | Proposed Template Statement | By
1 | The rhetoric used in [audio] is an extreme minimization of the event or situation described. | Tamar Wilner, adapting Credibility Coalition

Emotional valence


Signal: Extremely negative valence

Key | Proposed Template Statement | By
1 | The language of the reporter or main speaker in the [audio] is extremely negative