Home Up

Minutes: Credible Web CG (18 April 2018)

Topics

  1. Approval of last week's minutes
  2. Intros
  3. Related projects
  4. Technical approaches
  5. Next meeting

Minutes

# [Sandro Hawke] https://github.com/w3c/credweb/blob/master/agenda/20180418.md

Main item: talk about specific approaches to our work. If you can, please fill out the survey first: https://goo.gl/forms/aPUWPZD9doTUYT3A3


# [Gregg Kellogg] regrets for today's meeting.

# [sam boyer] i will be there for the meeting, but a bit late - my daughter is home sick from daycare

# [rhiaro] present+

# [Jon Udell] present+

# [Sandro Hawke] thanks @Gregg Kellogg and okay @sam, understood

# [Sandro Hawke] present+

# [Vagner Diniz] present+


# [Stuart Myles, Director of Information Management at the AP] present+

# [An Xiao Mina] present+

Approval of last week's minutes

# [rhiaro] @sandro says Normally I'd ask for a resolution, but since I only sent the draft last night people probably haven't had a chance to look.

# ... anyone who wants to look at it and hasn't had a chance to?

# ... just the zulip transcript. Is it readable for everyone?

#PROPOSED: approve last week's minutes

# [Stuart Myles, Director of Information Management at the AP] +1

# Sandro Hawke says type +1 if you're fine with it now, -1 if you want another week

# [Jon Udell] +1

# [John Connuck] +1

# [Vagner Diniz] +1

# [An Xiao Mina] +1

# [rhiaro] +0 I didn't read, but trust these people.. ^

#RESOLVED: approve last week's minutes

# [An Xiao Mina] we are credible!

# Sandro Hawke says today's agenda: Intros again, brainstorm existing projects and make a list. We should know about new things asap.

# ... Continuing from last week, and using the survey, about the technical approaches

# ... Any other agenda items?

# ... Scott put together a calendar after last week's meeting

# ... Pretty thorough

# ... (for relevant events)

# ... Put events you're travelling to on that calendar, to try to connect with others from the group

# ... For me, @An Xiao Mina and @rhiaro, it's WebConf next week

# ... Anyone else going?

# Vagner Diniz says yes I'm coming

# Sandro Hawke says At the misinformation track on Wednesday? See you there

Intros

# Sandro Hawke says you can copy and paste from last week if you want

# ... Please type in your name and affiliation and what you're doing here

# [Scott Yates] +1

# [An Xiao Mina] I am An Xiao Mina, call me An. Co-chair with Sandro and completely new to the web standards world but have long been looking at misinformation and social media. Director of product at Meedan, where we build tools for collaborative verification, and affiliated with the Berkman Klein Center, have been looking at networked journalism and networked movements in global contexts. Co-founded the Credibility Coalition, which helped formulate this group. Our focus is on statistical validation of indicators of content credibility that journalists and media researchers are considering, through an annotation workflow that we tested with 40 articles. Will be presenting at the Web Conference (just there on April 25), and was in Perugia, Italy last week for the International Journalism Festival.

# [Sandro Hawke] I'm @Sandro Hawke co-chair of this group, at W3C, funded in this work by Google and Facebook, background in web development and semantic web, and standardizing semantic web

# [rhiaro] I'm Amy Guy / rhiaro. Previously of W3C and Social Web WG, and University of Edinburgh, doing decentralisation stuff and linked data stuff. Now I work for OCCRP (the Organised Crime and Corruption Reporting Project), an investigative journalism org, but I'm not representing them here. I know things about web standards and data wrangling but not about credibility, journalism, or news so much.. but learning.

# [sam boyer] (i also studied at university of edinburgh in 2005)

# [Stuart Myles, Director of Information Management at the AP] I am Stuart Myles, Director of Information Management at The Associated Press in NYC. I direct metadata strategy throughout AP's global news operations. I am also Chairman of the Board of the IPTC, the news technology standards body, where a particular area of interest is trust in the media. https://iptc.org/

# [Scott Yates] My name is Scott Yates, Entrepreneur In Residence for CableLabs, the research arm of the cable/broadband industry. I'm interested broadly in any solution that increases trust in the internet, so much of which is transported over broadband wires from cable providers.

My background is in journalism (NYU and several print publications) and three startups.

# Davide Ceolin says he's from CWI in Amsterdam, semantic web, csv on the web WG

# [Sandro Hawke] credibility coalition

# Sandro Hawke says if you can think of a related project, let's make a list

# [sam boyer] fiskkit

# ... just type it into zulip

# [sam boyer] hearken notebook

# ... later we can expand into exactly what they are and what the relationship is

# [Scott Yates] I've actually built a list of about 60 different projects

# ... In touch with them about us being added

# [John Connuck] I'm @John Connuck representing Facebook. I'm an engineer on the News Quality & Credibility team based in New York. I've been at Facebook for ~3.5 years initially based out of Seattle, then Berkeley and now New York. This is my first time participating in a W3C group, so I'm very excited to jump in. I sadly did not attend the University of Edinburgh

# Scott Yates says I need to make a couple of tweaks, but I'll have my list up tomorrow

# [An Xiao Mina] Global Council for Misinformation: https://www.globalmis.info/repository, announcement here: https://www.journalism.co.uk/news/global-council-launches-to-document-and-connect-initiatives-fighting-mis-and-disinformation/s2/a720311/

# ... have some categorisations

# Sandro Hawke says We'll see if we can use that list, or fork it, great starting point

# [Stuart Myles, Director of Information Management at the AP] http://sites.ieee.org/sagroups-7011/

# [Jon Udell] http://www.aascu.org/AcademicAffairs/ADP/DigiPo/ is Mike Caulfield's project to teach students how to evaluate what they read online. https://webliteracy.pressbooks.com/ is the textbook.

# [Stuart Myles, Director of Information Management at the AP] ^^^ Standard for the Process of Identifying and Rating the Trustworthiness of News Sources

# [Vagner Diniz] I am Vagner Diniz, head of the Web Technologies Study Center at NIC.br. NIC.br is the Brazilian organization responsible for the country code .br domain name registration. Among the several research projects we carry out, one is related to data privacy and data protection. With the next general election coming soon, we are providing contributions to avoid a punitive/censorship environment regarding fake news. We led the W3C Data on the Web Best Practices Working Group.

# Sandro Hawke says I'd like to have a consensus document where every effort in this space agrees their relationship with each other. Where they compete, where they don't

# ... Seems to me there's a lot of people interested in doing something, and there's an awful lot to be done, so it would be a shame to waste effort competing

# ... Anything else you want to add? Maybe we'll look at Scott's list later

# ... That brings us to the main thing today

# [Jon Udell] Jon Udell, director of integration at Hypothes.is (open web annotation). I'm looking for ways to improve the quality of the ClaimReview metadata that's being produced by leading fact-checking orgs, and used (in a limited way) by Google and Bing -- while at the same time, crucially, streamlining newsroom workflow.


Technical approaches

# Sandro Hawke says Not many people responded to the survey, bit of a long shot getting people to write up their ideas. Most submissions were mine. I've been thinking about a framework for this space as I talk to people for the last couple of months

# ... part of the reason for doing the survey was to see if anyone would propose any other frameworks or ways, but that didn't happen


# ... I find there are so many things you could possibly do in this space, to get a handle on it this three way division seems to feel about right

# ... At a very high level, inspection is: you're looking at a webpage, and just by looking at it, without bringing in any external knowledge, can you tell if it's good or bad

# ... this is something we try to train kids in school to do. Can we do better? Much of the credco work has been in this space. I think the basic story is you try to enumerate the signals you could use when you look at the page to determine if it's true or false, pay a bunch of people to gather data, and use that to train a machine to tell whether things tend towards true or false

# Jon Udell says That's what people have been told and taught; his (who?) point is that you need to spend very little time on the page if you read laterally and look upstream for origins, which is often far more productive than trying to inspect the page itself in isolation

# Sandro Hawke says I hear a lot of people who don't like this approach at all, and some who do, and without trying to judge.. personally I think there is value in each of these, and we should pursue all of them, but not that we should each pursue all of them; some people are more enthusiastic about some than others

# Jon Udell says I would combine those by saying one of the things I would like to do by inspection is see how that page situates itself in a wider context. Often that's not possible online because things aren't cited, aren't linked

# Sandro Hawke says The second one, corroboration, focussing on comparing the claims and evidence that you see in this item with other sources. Fact checking, ClaimReview, falls under that. Highlighting what the claims are and having some ecosystem of factcheckers, and connecting claims wherever they appear with reviews from factcheckers

# ... this might turn into not professional factcheckers, but your friends or the original sources
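For concreteness, ClaimReview is the schema.org vocabulary that fact-checkers already publish and that Google and Bing consume. Below is a minimal sketch of one review, written as a Python dict that serializes to the kind of JSON-LD a fact-checker might embed; the URLs, organization name, claim text, and rating values are invented for illustration:

```python
import json

# A minimal ClaimReview record (illustrative values throughout).
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factcheck.example.org/reviews/1234",  # the fact-check itself
    "datePublished": "2018-04-18",
    "author": {"@type": "Organization", "name": "Example Fact Checkers"},
    "claimReviewed": "Seven people were affected by the incident.",
    # Where the claim appeared; this URL is what lets a platform match
    # the review to the story as it circulates.
    "itemReviewed": {
        "@type": "CreativeWork",
        "url": "https://news.example.com/2018/incident-story",
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,  # on a 1 (false) to 5 (true) scale
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly false",
    },
}

print(json.dumps(claim_review, indent=2))
```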

# ... The third is reputation, which is focussing on who is saying things

# ... Straightforward if you trust the source, but if you don't know enough about the source to trust it you want a reputation network, or to know what people say about other people

# ... Gets terribly complicated, all sorts of reasons people aren't open and honest about who they trust

# ... Maybe there's something that can be done. Maybe there's a resistance to pressure via groups, pseudonymity

# ... A particular newspaper might choose to be brave because they can hide behind their lawyers

# sam boyer says At the very highest level we can draw a distinction between commentary on people and commentary on work

# ... you can shape systems differently based on whether the target is a human. We can treat them separately

# John Connuck says It's worth looking at this at a publisher and content level as well, in addition to authors

# sam boyer says and at the time of publication, a compound identity

# Sandro Hawke says eg. I generally trust the NYT quite a bit but occasionally they publish something false, and I want to point that out. The fact I generally trust them is a reputation thing, and the fact that I don't trust one article is a corroboration thing

# ... One of the places that might break down is: I know the things my friends complain about the NYT about aren't usually false statements, they're representation. More subtle things. They're very good at factchecking, but still convey politics sometimes. Eg. a profile of a Nazi a few months ago. They viewed it as exposing the Nazi, but a lot of people I know viewed it as sympathising with the Nazi

# ... I don't know where that fits in the framework

# ... You could make up a claim, like their story is implying that nazis are okay, which is a claim that one could refute

# sam boyer says It's worth noting that when work is originally published it has no corroboration yet; all we have to go on is the earned reputation of the publisher. As the body of work builds, that becomes the basis for reputation

# Sandro Hawke says you could have a purely machine generated reputation score based on corroboration, you don't talk to any people at all, just add up the score over time
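As a toy version of that idea (everything here is invented for illustration, not a proposal): fold each corroboration verdict about a source into a running score, with smoothing so a single verdict doesn't dominate.

```python
from collections import defaultdict

# Toy machine-generated reputation: count fact-check verdicts per source.
# Real systems would weight by recency, reach, and severity of the error.
scores = defaultdict(lambda: {"checks": 0, "passed": 0})

def record_verdict(source, checked_out):
    """checked_out=True means the claim held up under fact-checking."""
    scores[source]["checks"] += 1
    if checked_out:
        scores[source]["passed"] += 1

def reputation(source):
    s = scores[source]
    # Laplace smoothing: an unknown source starts at 0.5 rather than 0 or 1.
    return (s["passed"] + 1) / (s["checks"] + 2)

record_verdict("news.example.com", True)
record_verdict("news.example.com", False)
print(reputation("news.example.com"))  # 0.5
```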

# ... I think there's a lot of complex judgements people will have to express, about reputation

# ... Some nuance in ratings; distinguishing accuracy from group identity

# ... a key thing in reputation is there are a lot of group identity things that get caught up. If we were to automate any of this I'm hopeful we could tease those apart

# ... Trust as a concept is not just about accuracy, it's also about benevolence; if the person is acting on your behalf you trust them

# ... It is reasonable to give somebody a high reputation as being on your side, even though they lie all the time

# ... We might argue that's what we see in politics

# ... So if we can separate those.. I can tell the machine that I trust this person but they lie all the time

# ... Hard problem

# ... All of these could be years worth of research. I hope for this group we can bite off little bits in the near term

# ... ClaimReview does that for corroboration already

# ... I think there have been some efforts in the inspection camp that are working already. I don't know of anything that's deployed in the reputation space

# ... How can we advance each of these a bit

# Jon Udell says there isn't really a retraction watch for news is there?


# ... In the world of scientific research there is a thing called retraction watch, which alerts people to the fact that a retraction happened

# [Stuart Myles, Director of Information Management at the AP] https://www.poynter.org/tags/regret-error

# An Xiao Mina says news diffs?

# Jon Udell says tracking changes right?

# An Xiao Mina says not quite retractions per se

# Jon Udell says A small piece, but could be useful. Newspapers do print retractions, but you couldn't get them all and do things mechanically with that data

# [Jon Udell] Q: Is there a RetractionWatch (https://retractionwatch.com/) for news?

# Sandro Hawke says I was talking to some people about scientific retractions last week, at a meeting of scientists talking about trust, and one of the huge problems of retractions in both communities is do the retractions get to the original audience. If you could make the retractions machine readable could you.. they post something bad, then take it down and post a retraction and you can't even see the thing that was bad

# ... What you'd like is that they'd leave it up with clear annotations saying don't look at this without understanding that it's been retracted

# ... Trying to hide it because it's wrong isn't good. But you don't want people finding it with no idea that it's been retracted
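As a sketch of what that machine-readable annotation could look like: schema.org has a `correction` property on creative works for pointing at a corrective, but the explicit status fields below are hypothetical, added only to show the shape of a retraction that stays attached to the original article rather than hiding it.

```python
import json

# Hypothetical retraction notice left on the original article so readers
# and crawlers arriving later see the status. The "x-" fields are invented.
retracted_article = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "url": "https://news.example.com/2018/explosion-story",
    "headline": "Explosion reported downtown",
    # Real schema.org property: link to the corrective story.
    "correction": "https://news.example.com/2018/explosion-story-corrective",
    # Invented extension: machine-readable status that travels with
    # the original, instead of the original quietly disappearing.
    "x-status": "retracted",
    "x-statusDate": "2018-04-17",
}

print(json.dumps(retracted_article, indent=2))
```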

# [Stuart Myles, Director of Information Management at the AP] https://www.americanpressinstitute.org/publications/good-questions/correction-strategies-6-good-questions-regret-errors-craig-silverman/

# Stuart Myles, Director of Information Management at the AP says I think it's rare for stories to be completely retracted, but pretty common to correct things

# ... In AP we publish what we initially know, then we publish something else that says 'according to officers on the scene 7 people were affected' and then later we find out only 5 people were affected

# ... Mainly not because there was an error, but because we distributed something that legally we weren't allowed to

# ... We don't totally 'kill' stories for mistakes. We 'kill' for a copyright violation. But for an image we thought was real that wasn't, that does happen

# Sandro Hawke says when something has been printed in the newspaper you can't unprint it

# Stuart Myles, Director of Information Management at the AP says Websites which track regret errors. Correctives.

# Sandro Hawke says trying to think if that fits in the three part framework, but I'm not sure

# Stuart Myles, Director of Information Management at the AP says I would argue that publications that issue apologies or correctives are more trustworthy. It doesn't tell you whether a particular thing is correct or not, but it's an indicator that they at some level care about accuracy

# Sandro Hawke says it's a really interesting indicator of trustworthiness of a scientific journal - how many retractions they do

# ... it's tricky

# ... if they retract things all the time are they making a lot of errors, or being really careful?

# John Connuck says this is one reason it would not be easy to automate purely based on corroboration

# ... just looking at whether an article has been factchecked or retracted doesn't necessarily indicate low quality or high quality

# Sandro Hawke says right, even the best sources can be wrong sometimes

# [An Xiao Mina] that's here: http://newsdiffs.org/

# John Connuck says right and the scale of the errors can be hard to judge, or a vast difference in quality

# Sandro Hawke says I'm in general very skeptical about machine learning applied to any of this because.. we as humans can't figure out how to do this, maybe machines can. I don't think we have a lot of history of that being successful

# ... Happy to be proven wrong

# ... They definitely have a role in helping to filter, but having humans in the loop to evaluate the things surfaced as most worthy of attention seems like the kind of architecture we'll do best with

# ... What other things might we be usefully able to do by exchanging data? Circulating corrections is one thing that wasn't on the list earlier

# [Stuart Myles, Director of Information Management at the AP] there is some discussion of corrections / correctives in AP's standards guidelines https://www.ap.org/about/our-story/standards-and-practices (corrections are updates to a current story, correctives are a new story which corrects yesterday's incorrect story or some aspect of it)

# ... I've seen some very specific things one might do to improve claim review

# [Katie Haritos-Shea] Sorry I am coming in late...

# Sandro Hawke says why is that left on the floor? It's more work to present it. Partly it's not very polished and I don't want to reveal the things I haven't thought about as carefully

# [rhiaro] https://github.com/alephdata/aleph is the tooling

# [rhiaro] https://github.com/alephdata/aleph/issues/334

# Sandro Hawke says I want to put people on the spot about where they stand on approaches. I'd like to hear more so it can help shape where we go

# [Katie Haritos-Shea] Present+ Katie Haritos-Shea

# Scott Yates says We're all wearing various hats, so when I say this I'm wearing my journalism background hat not my cable labs hat

# ... As a producer of content, the more I thought about it when I was in it, it's impossible to segment content in any meaningful way. It's very easy for us, because we all come from similar educational backgrounds or whatever, to say we know the difference between The Onion and the NYT, and it's worthwhile to think about that academically, but at the end of the day it's all just entertainment really

# ... None of the consumers of the NYT or the Onion or Fox news or Breitbart or... in almost no cases is it something really directly applicable to decisions that a person is making for their immediate life, how they're going to feed their family

# ... A word of caution. It's fraught with peril going down the path of trying to come up with a lot of categorisation beforehand. It's just a question of what's the content that is especially maliciously generated

# ... Nutrition labels affecting behaviour vs.. who is saying that there's no arsenic allowed. There's no nutrition label for arsenic, how do we keep it out

# ... If you want to have another discussion about what nutrition labels look like that's fine, but because we have this arsenic problem we need to address that first

# Sandro Hawke says let me try to paraphrase that back. One of the aspects of reputation that humans are best at is recognising bad things, or spreading the word about bad things. You could have a reputation network that is flagging problematic stuff, like: this source is problematic, and this is a person who I trust to recognise what is problematic. You could build a system out of that that would bring down bad stuff pretty quickly. It might also bring down all the good stuff. Is that the kind of thing that you're hypothesising there?

# Scott Yates says That's I think a possible solution to the underlying problem, but I think the problem with the platforms is that it's harder and harder to recognise.. before everything was turned into content over platforms it was very easy to distinguish between mediums. One of the things that happens with facebook and others is things that come from disparate sources all kind of look the same, and all have this semblance of credibility because they appear in a facebook feed

# ... I have very knowledgeable, educated friends who share fake news, just because they thought it was funny, and it's so easy to do

# ... Not to take it out of the construct, but I think the issue is how these things work within the platforms

# John Connuck says I tend to agree. That's something we think a lot about. How do we unflatten the news. Part of that is figuring out how do we identify the people who are good actors, and elevate that content, to make it clear when news is being spread that's not coming from a credible source

# ... Somewhat easier to detect patterns of spaminess or adfarms, than it is to differentiate between medium news and very high quality news

# ... I think it's worth thinking about the flip side, not just pursuing misinformation

# Stuart Myles, Director of Information Management at the AP says one thing I'd like to suggest is maybe there are useful signals that don't actually tell you that this is credible or not, but that this is a thing that you need to think about

# ... one of the issues with factchecking is that it takes a bit of time, it doesn't take much time to write a non factual piece. It takes a long time to verify it. Maybe before that verification is complete or to identify that it needs to be verified, maybe there could be useful signals to say this needs to be looked at, or be careful

# ... A lie travels around the globe while the truth is still putting on its boots... velocity.. if people are excited and they share it, that's one you want to say maybe it's not right

# ... it takes a while to look into it and say yes it turned out to be true

# ... Maybe only one institution knows about it and breaks the news, and it spreads

# ... Eg. when there was a tweet about explosion at the whitehouse that was shared a lot, that wasn't true

# ... Those kinds of signals may also be useful to other platforms, distribution platforms, news orgs

# ... And for things that people aren't sharing, maybe it doesn't matter as much

# Sandro Hawke says Are you picturing that as a signal from the news publisher, or the individuals doing the sharing, or both?

# [Jon Udell] "A lie travels halfway around the world before the truth puts its boots on."

Quote investigation: https://quoteinvestigator.com/2014/07/13/truth/

# Sandro Hawke says So getting into tech, you could imagine anybody who's got data... twitter could automatically recognise that things are being a bit weird and should publish that for other people as a demand to check this out. Or individual users could say this looks very suspicious and twitter could aggregate that
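A toy sketch of the aggregation half of that (the window, threshold, and data shape are all invented): count recent "please check this" flags per URL and surface the most-flagged items as a review queue.

```python
from collections import Counter
from datetime import timedelta

WINDOW = timedelta(hours=6)  # how far back to look (invented)
MIN_FLAGS = 25               # ignore barely-flagged items (invented)

flags = []  # (timestamp, url) pairs from user reports

def flag(url, when):
    """Record one user's 'this looks suspicious, check it' report."""
    flags.append((when, url))

def review_queue(now):
    """URLs with enough recent flags, most-flagged first."""
    recent = Counter(url for when, url in flags if now - when <= WINDOW)
    return [(url, n) for url, n in recent.most_common() if n >= MIN_FLAGS]
```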

# Stuart Myles, Director of Information Management at the AP says The thing about asking people to flag something as 'fake news' is there's a tremendous incentive for bad actors to flag fake news on stuff they don't like

# ... I don't want to say you can't trust people but you've got to look at the incentives for people to use it

# ... Equally not the government, or any other org... not clear who you ask

# ... But on the other hand asking people to look at something?

# ... If tons of people are sharing something that seems to be a signal that it's worth investigating. It could be totally true. Or an honest mistake, or a deliberate lie

# Sandro Hawke says One technical solution I picture is if flagging was of the form "I would like this investigated more thoroughly" and I'm gonna put 20c behind that, it doesn't really matter whether they're doing it maliciously or not, there's a reward for data. I don't know how you decide who gets that

# ... But to some extent the instinct that I want to investigate that more thoroughly, people have this motivation.. they have to do some work. It gives them some crowdsource thing

# Jon Udell says Just the fact that something is spreading rapidly could be a trigger for prioritising more resources into evaluating that thing, an excellent point

# Stuart Myles, Director of Information Management at the AP says if it's fake news and nobody sees it, does it matter. If tons of people are seeing it that's more important

# John Connuck says I believe that's more or less what we currently do. I'm not 100% sure how articles get queued into our third party factchecking program (partnered with eg. the AP); my understanding is that's based on user reports and they are prioritised in the queue, but part of it is based on engagement or reach

# [sam boyer] i'm trying to keep up while there's a lull in this micro outage - IMO, conversation calling things out on other social platforms, e.g. Twitter/FB, is always going to happen. if we want to have corroboration/credibility/assessments that are both social and useful, it will likely need to be in some sort of specialized social platform that is specifically designed to handle that kind of discourse. e.g., stackoverflow is a specialized platform for a particular kind of discourse (technical Q&A)

# Sandro Hawke says That is stuff that has been put in place since the 2016 election, or..?

# John Connuck says The third party fact checking I believe started right after the election

# Sandro Hawke says My personal experience of fb has changed a lot since the election in terms of what I'm seeing

# ... I dunno how much that problem is still a major problem. Still a big problem to tell what's legit, how much is clearly bad actors

# John Connuck says I don't have any analysis, but there's definitely a mix of really obviously fake news and just claims of legitimate publishers that have made mistakes or misreport things

# Sandro Hawke says and of course it's good to fact check things that are true as well, putting more resources on checking them out makes a lot of sense

# ... One of the questions there is.. facebook has this internal system and a question is whether that's something they're interested in opening up to a larger ecosystem

# ... I know larger than facebook sounds like an oxymoron. There's a diversity on the open web that's not internal to facebook

# John Connuck says I can't speak to the overall appetite towards that, but there's nothing specific to fb about these claim reviews, the content is not unique to fb

# Sandro Hawke says The queue of things that the platform has identified as should be checked is visible to the checking partners? It's not public?

# John Connuck says As far as I'm aware

# Sandro Hawke says That would be an interesting question to ask, whether they would be willing to make that public. What are the possible negatives for the ecosystem?

# ... Fact checkers can understand what to prioritise, but maybe it also empowers people who are trying to fly under the radar trying to circulate stuff

# ... Giving the feedback of when they have tripped the wire empowers bad actors

# Stuart Myles, Director of Information Management at the AP says another thing this relates to is that a lot of news is quite boring. A lot of things that people call misinformation is really optimised for shareability. The velocity thing or the frequency is an interesting signal

# ... Some news is incredibly viral and totally legitimate but some of it is not

# ... Some really viral things are not true or not totally true, that's why it's an interesting signal

# Sandro Hawke says A really interesting metapoint is that it's not necessarily a signal of true or false, but a signal that it should be checked more carefully

# ... There may be other signals, but having a category of signals correlated with need to be checked or something

# An Xiao Mina says Soroush Vosoughi at the Media Lab has been looking at specific types of spread and how they correlate with misinformation. If a certain claim or link spreads very quickly among no-follower users, that could indicate a coordinated botnet campaign, versus when it's spreading from very large nodes. Other ways to break down the type of spread into subsignals
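A toy version of one such subsignal (the threshold is invented): the fraction of sharers with near-zero followers, which would spike under coordinated bot-like spread and stay low for organic virality driven by a few large nodes.

```python
LOW_FOLLOWER_THRESHOLD = 5  # "near-zero followers" cutoff (invented)

def low_follower_share_ratio(sharer_follower_counts):
    """Fraction of sharing accounts that have almost no followers."""
    if not sharer_follower_counts:
        return 0.0
    low = sum(1 for n in sharer_follower_counts if n <= LOW_FOLLOWER_THRESHOLD)
    return low / len(sharer_follower_counts)

# Four tiny accounts and one large node sharing the same link:
print(low_follower_share_ratio([0, 2, 1, 3, 15000]))  # 0.8, a suspicious pattern
```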

Next meeting

# Sandro Hawke says What is going to be most useful to focus on at next meeting? Planned for next week

# ... Zeroing in on claim review, looking at some of the indicators projects

# ... Try to get a feel for what the space looks like, how to define the problem clearly for a charter

# ... Do we want to pick one of these three for next week and maybe people can prepare to present on one or two or something?

# [Stuart Myles, Director of Information Management at the AP] i'm going to be away next week at the IPTC meeting. and looking at our shiny new credible calendar, it seems like a few others will be at the web meeting, so wondering if we will have a meeting next week?

# ... Or we could go into corroboration and claim review

# ... Floor is open

# [sam boyer] maybe it'd be too fractious for us at this stage, but one thing i've found effective is trying to explicitly define non-goals for the group

# Sandro Hawke says any other regrets for next week?

# ... Any preferences which to look into next?

# Jon Udell says corroboration

# Davide Ceolin says I'm a bit wondering about the different areas.. not one in particular at the moment... interested both in ways in which a source can look credible, so guidelines and ways a source can show its own credibility, and ways other sources can determine credibility of a third party

# An Xiao Mina says I don't have anything that stands out at the moment. Sam's suggestion of thinking about non-goals is also helpful. What is outside our scope

# ... vs what are we explicitly focussing on. It's helpful to define what we're not doing vs what we are doing, there's so much

# [Katie Haritos-Shea] Pro and Con board

# Sandro Hawke says One being the gathering of the data for the indicators, running the studies; Credibility Coalition is probably gonna do that. Actually producing a dataset of how people review pages is probably out of scope

# ... Anything else?

# Sandro Hawke says we can get into that a bit next week

# Stuart Myles, Director of Information Management at the AP says I'm interested in all of these things, from an AP perspective we're particularly interested in what the trust project is doing, how we know that the news we convey from other people is trustworthy

# Sandro Hawke says One other thing, thinking about the agenda for the next few weeks: if folks from the Trust Project or ClaimReview folks could join the call for that particular topic, that would help. I'll look into that

# John Connuck says I pretty much would say the same as @Stuart Myles, Director of Information Management at the AP, from my team's perspective. I think it makes sense to start with defining a charter with goals and non goals, before looking at specific things

# Sandro Hawke says I'm thinking of this as look at them influencing the charter, not all the details

# [Katie Haritos-Shea] +1 to John

# ... So ways people indicate that they're trustworthy

# Jon Udell says What would a successful outcome for this group look like from a w3c perspective?

# Sandro Hawke says I think in general vocabularies have not been terribly successful. None of them that I can think of have got beyond a niche or really transformed an industry

# ... The major successes have not been in vocabularies

# ... One of the reasons I'm talking a lot about process is because I want to figure out a better process. I think being more iterative... this is why I like ClaimReview.. none of the vocabulary groups I know have done that; they haven't taken something that's already deployed. It's harder to get a ball rolling from nowhere.

# Vagner Diniz says I'm not comfortable with indicators. I'm not sure if that would be one of our goals in this charter but I would like to see in this charter at least some guidelines or some kind of best practice for a more trustworthy web

# [Jon Udell] +1 for examining what's already deployed and looking for ways to iteratively improve.

# ... I'm still not comfortable with indicators because I'm not sure it's possible to create indicators to make the web more trustworthy

# [Stuart Myles, Director of Information Management at the AP] sadly i have to drop, so i can run to my next meeting. ttfn!

# ... In this charter I would like to see outcomes like guidelines, use cases, best practices

# Sandro Hawke says I'm trying to imagine a best practice

# Vagner Diniz says For instance one thing is about transparency in news. So the more transparent the news is, the more likely it is good. Probably if you know the source, if you know the author, if you find the date, if the news mentions real people, if you find some signals, you have some good practices that you can find in the news that suggest you may have something trustable

# Sandro Hawke says see you all next week!

# [Sandro Hawke] SUMMARY: The meeting was wide ranging, as we're trying to understand the scope for a charter. I presented a possible three-way split into separable areas (dubbed inspection, corroboration, reputation), and we talked about whether everything fits into those categories (answer: mostly, but not perfectly) and folks shared their thoughts about promising directions to go.