Misinformation, as a problem, does not have its origins in contemporary times. However, the current information dissemination landscape, consisting mainly of social media platforms, has transformed it into one of the defining problems of the information age. This brings us to the question: what makes people believe in fake news, and why is it such a hard problem to tackle effectively?
News Guide is a browser extension that runs automatically on social media platforms and scores news based on "the quality of sources," "the quality of citations," and "the tone used." Unlike other current AI tools, we rank the content on a 0-100 scale and give the user a recommended result instead of only a binary credible/not-credible label.
A three-member team project
Human-AI Interaction principles have been incorporated into our design:
We have included explanations throughout the app to clarify the algorithmic process, both for the total score and for each individual indicator. We also give users the option to read more by linking to our external website, which will contain our detailed methodology as well as model cards.
For transparency, the result calculation process is clearly displayed using a pie chart. We also present a breakdown of the score for each evaluated article to ensure maximum transparency in the scoring process.
Fairness and Accountability
For accountability, we give users the option to submit feedback in case of an unfair assessment by the algorithm. This recourse to human intervention when the algorithm makes an unfair decision allows for more accountability and helps foster trust in our solution.
We carefully analyzed Pew Research Center data examining news consumption habits across social media platforms, along with its News Platforms Fact Sheet.
Adults between ages 18-29 mostly get their news from digital devices
Do not fact-check often
Worn out by the amount of news online - News Fatigue
Nearly a third of Americans still regularly get their news from Facebook
We found that digital tools designed to combat fake news could be broadly categorized into the following groups based on their primary focus: Bot/Spam Detection, Credibility Scoring, Fact-checking, Education/Training, Verification, Whitelisting, and Codes/Standards. To further evaluate AI-based solutions built for tackling misinformation, we chose to explore two of these categories in depth - Credibility Assessment and Fact-Checking Tools.
The tools above did not present their information succinctly.
We ran initial assessments on their usability and ease of information access and found that it is difficult to gather information at a glance from these websites.
The tools that particularly interested us, however, were those that straddled the boundary between Fact-Checking and Credibility Assessment. We found several examples of such tools and chose The Factual and Newstrition as the two applications to dive deeper into.
1. Understand participant news consumption habits on social media.
2. Learn how often people evaluate news and what motivates them to do so.
3. Get insights about their current news evaluation methods and challenges in using them.
4. Gather expectations for a news evaluation tool.
All participants were in the 18-29 age group.
4 KEY POINTS
Thoughts on the relevancy of the problem of misinformation
80% said misinformation is a crucial issue that is complex to solve.
50% said there is just too much information online, and information is subjective, which can make it hard to distinguish the truth and what is right or wrong.
40% cited marketing strategies and political agendas as driving factors for this problem, which adds an extra layer of complexity to solving it.
Attitudes towards News Evaluation
Motivation and interest were the two most commonly cited reasons that pushed people to critically evaluate the news they consume online.
Check hotly contested current affairs so that they can make up their mind about the issue on their own.
Time was cited as an important factor when it comes to evaluating news.
“I do not have time to evaluate” was a common refrain.
Tend to evaluate information that is relevant or specific to their context.
Current News Evaluation Strategies
Google Search was mentioned as the most common method for evaluating a piece of news.
Seek out official news sources or trusted sources to verify claims.
Discuss important events or news with family or friends to get more information for evaluating the news article further.
Emphasis was placed on looking at an issue from multiple viewpoints, as it helped them build context and formulate their own opinions.
A few participants relied heavily on knowledge-based websites like Wikipedia and YouTube for further news evaluation.
Concerns or Expectations from an AI-based News Credibility Assessment Tool
100% said they would not trust any tool by itself completely, as these tools themselves could be biased.
The tool could be influenced by the developer’s or designer’s own biases, political agendas, or general outlook on contentious issues.
Present multiple viewpoints or coverage of the same issue.
Help them evaluate information at a glance without worsening the information overload they already feel.
The literature review, competitive analysis, and interviews that we conducted as part of generative research helped us redefine our goal as follows:
Discover the challenges and barriers that users face in the adoption and sustained use of News Credibility Evaluation Tools.
Create a prototype of a fact-checking tool that would integrate seamlessly into the user’s news consumption environment.
After revising our goal, we designed an observational study and an online survey to evaluate existing tools and assess evaluation criteria that should be included in the news credibility assessment tool we design.
What tools people use to evaluate news on social media.
What reasons or concerns they have about using these tools.
6 social media users in various occupations who are regular consumers of news on social media and all in the age group of 18-29.
Each participant was directed to browse news posts on a test Facebook account which we had set up for the purposes of this experiment. We then asked them to go through the following two tasks:
Evaluate an article of their choice using their preferred method for assessing news articles.
Evaluate the same article from Task 1 using the evaluation tools - The Factual and Newstrition.
Preferred news evaluation method
Convenient and comfortable for them to use.
Google offers a large repository of articles.
Google Search results can feel like information overload.
Users need to think of the right keywords for an efficient search.
Participants tend to look for further articles that report the same news, preferably from trustworthy outlets.
We decided to incorporate selected Google search results into our system.
Factors that make people not trust the news
Motivation for using evaluation tools
Reasons for increased motivation
Results show automatically after installing the tool.
More awareness about the credibility of news on social media.
Comparison of The Factual and Newstrition
Comfortable to read/use
(rating from 1 to 5)
Indicators clear to you
(rating from 1 to 5)
Something we can learn from
Each section with representative colors
Smooth user experience
Shows the summary result directly on each social media post
Easy setup without much initiative on the user’s end
Divides information in different box sections with colors
What needs to be improved
Lack of understandability
Lack of transparency on the result
Low color contrast interface
Lack of transparency
Complicated user flow
The most mentioned concerns are
“Don’t understand how the result comes out”
“Don’t understand the meaning of the result”
“Don't know how those indicators help to build the truth of news”
News Assessment Factor Survey
We looked deeper to determine the relative importance (weight) of the news credibility factors.
We got 38 responses in total with about an even distribution of males and females.
While evaluating a news article, which of the factors matter to you?
We ranked the importance of the factors by calculating a weighted score for each:
Weighted Score = (count of Yes × 1) + (count of Maybe × 0.5) + (count of No × 0)
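As a minimal sketch of this calculation (the response counts below are hypothetical, not the survey's actual numbers):

```python
def weighted_score(yes: int, maybe: int, no: int) -> float:
    """Weighted score for one factor: Yes counts fully, Maybe counts half, No counts zero."""
    return yes * 1 + maybe * 0.5 + no * 0

# Hypothetical split of the 38 responses for one factor
print(weighted_score(30, 6, 2))  # 33.0
```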
How is the recommended result / score computed?
From the survey, the seven top factors - Sources, Tone of articles, Title, Date of publication, Evidence cited, Source bias, and Meaningful quotes - had over 90% positive responses. We decided to use these factors in our main content.
The weight (percentage) of each factor is determined by the weighted score we calculated earlier in the analysis of the News Assessment Factor Survey.
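A minimal sketch of how per-factor weights and per-factor ratings could combine into an overall 0-100 score; the weighted scores and ratings below are hypothetical placeholders, not our survey's actual values:

```python
# Hypothetical weighted scores per factor (from the survey's weighted-score formula)
factor_weights = {
    "Sources": 36.0,
    "Tone of articles": 35.5,
    "Title": 35.0,
    "Date of publication": 34.5,
    "Evidence cited": 34.0,
    "Source bias": 33.5,
    "Meaningful quotes": 33.0,
}

# Normalize the weighted scores into percentages that sum to 1
total = sum(factor_weights.values())
percentages = {factor: w / total for factor, w in factor_weights.items()}

def overall_score(ratings: dict) -> float:
    """Combine per-factor ratings (each 0-100) into the overall credibility score."""
    return sum(percentages[factor] * r for factor, r in ratings.items())

# Hypothetical per-factor ratings for one article
ratings = {factor: 80 for factor in factor_weights}
print(round(overall_score(ratings)))  # 80
```

Normalizing by the total keeps the overall score on the same 0-100 scale as the individual indicator ratings.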
Ideate & Design
Participants liked The Factual’s thumbnail on the Facebook post, and they favored the simple, intuitive flow of the extension. We decided to follow a similar flow to The Factual, as follows:
The Credibility Score thumbnail is shown automatically at the top right of each post.
Hovering over the thumbnail pops up the evaluation criteria and the total score / recommended result.
The user can click the View Summary button to see the complete assessment content.
THUMBNAIL OF CREDIBILITY SCORE
HOVER INFO OF THUMBNAIL
We divided the assessment content into a few sections:
It's basic but crucial. Users can judge for themselves while looking at this information, since site bias might affect their trust in a news article, according to the earlier interview feedback.
This section provides more information on each individual indicator and its calculated score, which contributes to the overall credibility score.
We believe user feedback and control are critical to improving the underlying AI model’s output and user experience. When users have the right level of control over the system, they’re more likely to trust it.
Hint Design for Explanation
Adding hints to each indicator to increase the explainability of assessment factors.
Tab Design for Related Information
According to the observational study results, people tend to read more related sources to evaluate news, and they prefer Google for this. To increase convenience and usability, we incorporated this into our extension so that users can browse Google articles and videos directly within it.
We checked the usefulness and efficiency of our proposed design solution in order to make any changes or improvements as required.
We recruited 6 participants who are also avid social media news consumers. Participants included both genders and fell in our target age group of 18-29.
The moderator asked the participants to perform FIVE tasks and think aloud during the process, using the high-fidelity user interface we designed.
Thumbnail of Credibility Score
Hover Info of Thumbnail
Related Articles Page
Related Videos Page
Evaluate the Thumbnail of Credibility Score on Facebook
Participants browsed Facebook as usual while we observed their reactions upon noticing the thumbnail with the overall credibility score.
5 out of 6 noticed the thumbnail.
The color alerted them to stay doubtful of the post.
The logo is unclear and resembles a foreign-language character, which confused them.
Evaluate the Hover Information of Thumbnail
Participants walked through the evaluation process in their own way; we then asked them what information they needed in the hover view.
Hover information is concise and clear.
50% misinterpreted the meaning of the posted date.
The black View Summary button is not appealing.
The brief summary should show all the indicators.
Evaluate Summary Page
The hint of credibility score helped them understand the relationship between the statement and numerical score.
All the participants said the hint beside the “Credible” wording is clear.
5 out of 6 stated that the hint gave them a good mental model of the working of the extension.
The graph about the weight for each indicator is especially useful for understanding the methodology.
4 participants pointed out that the wording “External Sources” didn’t match the scale (medium) we use.
They suggested that “Citation Quality” better matches our intention of measuring the quality of cited sources.
Color representation should be consistent with the other indicators.