Minutes: Credible Web CG SUBGROUP 1 on Inspection (04 June 2018)

Topics

  1. Credibility Coalition WebConf paper
  2. Promising indicators: Emotionally charged tone
  3. Promising indicators: Clickbait Headline
  4. Promising indicators: Accuracy of Representation of source article
  5. Next Steps

Minutes

# [Sandro Hawke] https://credweb.org/agenda/20180604.html

Credibility Coalition WebConf paper

# [An Xiao Mina] Web Conference deck: https://docs.google.com/presentation/d/11MzfbhicQG5jNwrilr8DXPbFuRVuyw4t1xCowb0HfTo/edit#slide=id.g3880518423_0_0

# An Xiao Mina says: "We reviewed 50 articles for the indicators on slide 12. Slide 20 has the indicators that were found to correlate with credibility"

# .. the number of ads doesn't seem to have a correlation with credibility, but the aggressiveness of ads does. The next study will follow up in more depth

# Sandro Hawke says: "What makes a good indicator (item 4 on the agenda)? Studies covered correlation to credibility but didn't mention security. We should have a table of "good" indicators. This includes both inter-rater reliability and correlation with credibility"

# An Xiao Mina says: they hired both experts and students to rate and compared the results

# Amy Zhang says: can we add whether an indicator is relevant to multiple users

# Sandro Hawke says: three variables, @Amy Zhang wants to add salience

# Sandro Hawke says: dimensions from agenda are: measurability, accuracy and security

# Sandro Hawke says: does comprehensibility cover what you're talking about?

# Amy Zhang says: "more like relevance"

# An Xiao Mina says: the study focuses on articles relating to public health and climate science. There's a body of research to lean on, and domain experts in these areas serve as points of reference. CredCo members have expertise in these areas

# An Xiao Mina says: the next area is disaster response. We're starting with the domains with a narrower range of factual disagreement

# .. trying to validate the process

# Connie says: "while there are legitimate times when experts disagree about facts, could we support agreed upon ranges? I.e. Harvard says there are 20k deaths in Puerto Rico and New York Times says 70k, but it's definitely not 100.

Promising indicators: Emotionally charged tone

# [Sandro Hawke] "outrage, snark, celebration, horror, etc"

# An Xiao Mina says: indicator was framed as, "Does the article have an emotionally charged tone". Raters highlighted specific passages. Next step is to define this more narrowly (listed as outrage, snark, celebration, horror, etc.)

# .. emotions are very culturally specific

# [Sandro Hawke] culturally specific, subject specific, personal specific

# [An Xiao Mina] https://credweb.org/cciv/r/1#emotionally-charged-tone

# Sandro Hawke says: "This feels like a case where security is possible. I don't think you can really game this indicator."

# [Sandro Hawke] valence, positive vs negative tone

# Connie says: the NLP classifier VADER analyzes the "tone" of text (positive or negative). Beyond that, there's an element of directionality (i.e. is the article trying to provoke some action?)
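
For reference, here is a minimal sketch of the kind of valence scoring Connie describes, using one common open-source implementation of VADER (the vaderSentiment package). The example passages are invented for illustration; real use would run over article text.

```python
# Minimal sketch of valence scoring with VADER (pip install vaderSentiment).
# The example passages are invented placeholders.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

passages = [
    "Officials released the quarterly figures on Tuesday.",
    "This OUTRAGEOUS decision will absolutely horrify you!!!",
]

for text in passages:
    # polarity_scores returns neg/neu/pos proportions and a compound score in [-1, 1]
    scores = analyzer.polarity_scores(text)
    print(f"{scores['compound']:+.2f}  {text}")
```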

# Amy Zhang says: we have a related indicator for "exaggerated claims"

# An Xiao Mina says: "I like the valence angle"

# Sandro Hawke says: serious users don't use sarcasm

# An Xiao Mina says: true, but hard to detect. Also not true in an op-ed

# Sandro Hawke says: "Does everyone agree with my assessment that this is a non-game-able (secure) indicator?"

# [John Connuck] @Amy Zhang says, "it's easy to verify unlike some other indicators"

# Sandro Hawke says: I mean, can bad actors easily change their behavior to get around the indicator?

# Amy Zhang says: there are two aspects to security: 1. verifiability and 2. gameability

# An Xiao Mina says: "Can we take any learnings from clickbait and spam?"

# [Sandro Hawke] "get rich" spam, similar triggers

# Amy Zhang says: "There are some good parallels with spam. Both content analysis and contextual analysis to determine whether something is spam"

Promising indicators: Clickbait Headline

# An Xiao Mina says: Clickbait headline is related to emotional tone. @Amy Zhang developed a typology of clickbait types

# [An Xiao Mina] here's one headline assessed as clickbaity by annotators: https://docs.google.com/presentation/d/11MzfbhicQG5jNwrilr8DXPbFuRVuyw4t1xCowb0HfTo/edit#slide=id.g38d3ddfe32_2_30

# Amy Zhang says: There are different definitions of clickbait across sites and platforms, so this includes multiple types of clickbait

# [Sandro Hawke] https://credweb.org/cciv/r/1#clickbait-genres

# Amy Zhang says: this came from existing research, but it's not a rigorously tested taxonomy

# Amy Zhang says: "This was a clickbaity scale rather than a binary decision"

# Amy Zhang says: the hypothesis is that clickbait correlates with lower credibility because it tends to prioritize ads or impressions over truth or user experience

# [Sandro Hawke] Hi-Cred articles can be clickbait-y
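
Since the annotation used a clickbait scale rather than a binary label, the sketch below shows a purely illustrative heuristic scorer. The cue patterns are assumptions loosely inspired by common clickbait genres, not the taxonomy from the CredCo codebook, and a real system would be trained and validated against annotated headlines.

```python
# Minimal sketch of a heuristic clickbait score on a 0-N scale.
# The cue patterns are illustrative assumptions, not the CredCo clickbait genres.
import re

CUES = [
    r"^\d+\s+(things|reasons|ways|photos)",     # listicle framing
    r"you won'?t believe|will blow your mind",  # curiosity gap / hype
    r"\?\s*$",                                  # question headline
    r"!{2,}",                                   # exclamation pile-up
    r"\bthis one (trick|weird)\b",              # "one weird trick" pattern
]

def clickbait_score(headline: str) -> int:
    """Count how many clickbait cues the headline matches (higher = more clickbait-y)."""
    text = headline.lower()
    return sum(1 for pattern in CUES if re.search(pattern, text))

for h in [
    "Senate passes appropriations bill after late-night session",
    "17 Things You Won't Believe Doctors Do!!",
]:
    print(clickbait_score(h), h)
```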

Promising indicators: Accuracy of Representation of source article

# An Xiao Mina says: "We asked annotators to assess how accurately the scientific sources are represented in the news article. Tough to assess. Requires a lot of work and there are blockers (like initial source is behind a paywall).

# [An Xiao Mina] https://credweb.org/cciv/r/1#accuracy-of-representation-of-source-article

# An Xiao Mina says: "If I recall this is one of the most tightly correlated with credibility"

# Sandro Hawke says: "This seems like one of the most expensive to measure"

# An Xiao Mina says: "Yeah, this is very labor intensive and requires annotators who can do close reading of the text"

# Sandro Hawke says: "Can we separate measurability into IRR and cost/time"

# Sandro Hawke says: "In the study did you measure time spent on indicators?"

# An Xiao Mina says: "Not for specific indicators, only overall"

# Amy Zhang says: "We also need to take into account the credibility of the source"

# Amy Zhang says: "For source credibility, we focused on the impact factor of the journal (which also has problems, but it's the best available measure)"

# An Xiao Mina says: number of citations is also a proxy for credibility

# [Ed Summers] Number of citations is basically the same thing as impact factor.

# John Connuck says: something like PageRank (e.g. number of citations) might help and scale beyond just science journals
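
A minimal sketch of the idea John raises: running PageRank over a citation graph with networkx. The papers and citation edges below are invented placeholders, and the library choice is just one convenient way to illustrate how PageRank differs from a raw citation count.

```python
# Minimal sketch: PageRank over a toy citation graph (pip install networkx).
# The papers and citation edges below are invented placeholders.
import networkx as nx

# Directed edge A -> B means "paper A cites paper B"
citations = [
    ("paper_A", "paper_C"),
    ("paper_B", "paper_C"),
    ("paper_B", "paper_D"),
    ("paper_D", "paper_C"),
    ("paper_E", "paper_D"),
]

graph = nx.DiGraph(citations)

# PageRank rewards papers cited by other well-cited papers,
# unlike a raw citation count, which treats every citation equally.
scores = nx.pagerank(graph, alpha=0.85)

for paper, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{paper}: {score:.3f}")
```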

# Amy Zhang says: "This is very related to the reputation question"

# [An Xiao Mina] the :turtle: ("turtles all the way down") problem is a nice way to talk about some of these indicators

# Amy Zhang says: "You should be able to ask annotators whether the facts match between article and source, which is easier than asking about the recursive credibility/reputation of the source"

# Sandro Hawke says: can you just use the abstract?

# An Xiao Mina says: "It's possible paywalls are actually fueling misinfo because details are behind paywall vs. just the abstract"

# [An Xiao Mina] ^ this could inform a possible focus area for our next study

# Connie Moon Sehat says: post-Fukushima ocean current graphics were misused

# Connie says: there are high quality citations but the data is misused

# [Ed Summers] Image provenance is an interesting angle: e.g. Google reverse image search, TinEye.
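
As a rough illustration of one piece of the provenance angle Ed mentions, the sketch below compares two images with a perceptual hash using the ImageHash library. The file paths are placeholders, and real reverse-image search services like Google or TinEye do far more than this simple near-duplicate check.

```python
# Minimal sketch: flag near-duplicate images with a perceptual hash
# (pip install ImageHash Pillow). The file paths are placeholders.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original_graphic.png"))
candidate = imagehash.phash(Image.open("article_graphic.png"))

# Hamming distance between hashes: small values mean the images are
# visually near-identical (possibly re-used or lightly edited).
distance = original - candidate
print(f"perceptual hash distance: {distance}")
if distance <= 8:  # threshold is an arbitrary illustrative choice
    print("likely the same underlying image; check how it's being used")
```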

Next Steps

# Sandro Hawke says: another meeting like this with more detail

# [Scott Yates] And just for people willing to listen to some country music: https://www.youtube.com/watch?v=LWx6csgGkg4

# Sandro Hawke says: Would people be willing to meet in smaller groups to go through the different dimensions for various indicators?

# An Xiao Mina says: I'd like to focus on ad indicators, especially with people who have ad-tech experience

# An Xiao Mina, Scott Yates say: "This size feels good"

# Sandro Hawke says: "Same time two weeks from now?"

# [An Xiao Mina] quick show of thumbs - how useful was this meeting as a focus area? I really appreciated everyone's feedback

# [Davide Ceolin] Pretty much. Among other things, I think that there's a lot of partially overlapping definitions around and it's interesting to clarify and discuss them in detail.