This extension analyzes the truthiness of an article and detects fake news.
Unpartial is an AI-powered fake news evaluation tool for articles. One of the hardest challenges for an AI trying to learn about the world around it is telling how much it should trust what it is reading in documents and the news. Simply having access to all of the internet for fact finding isn't enough. A system needs to be able to reconcile conflicting information and fake news. Unpartial wasn't built for humans, but it works well, and so the robot overlords thought we should share.
Each time you press the Unpartial button, our AI checks the trustworthiness of the statements in the article, creates a 350-character summary based on our TLDR Summarization technology, and lets you know a few of the concerns it has with the article. These include author bias on gender, unsupported opinions, and other telling factors about the post.
What isn't Unpartial?
Fact checking. While fake news can be well written and seem highly credible, 99% of the time it isn't. While the Recognant AI that powers Unpartial can do fact validation, it is expensive, and 99% of the time it isn't needed for flagging suspect news. One of the primary reasons this is true is that the goal of fake news is to convince the reader of something. To do that, fake news is authored with a strong bias toward its position. Good journalism lets facts speak for themselves. This doesn't mean that a biased article has false information, but an article that is one sided is false by omission.
How does it work?
Unpartial uses part of Recognant's AI to evaluate articles. The evaluation examines many factors in how the article was written, including how grammatically correct the article is, how biased it is toward a position, how factually dense it is, and whether the article contains subjective statements. There are other factors, but these are some of the primary ones.
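The actual model is proprietary, but the kinds of signals described above can be sketched with a toy heuristic. Everything here is an illustrative assumption — the word list, the regexes, and the idea of counting numbers as a proxy for factual density are not Recognant's actual method:

```python
import re

# Illustrative lexicon -- a placeholder, not Unpartial's real subjectivity list.
SUBJECTIVE_WORDS = {"amazing", "terrible", "obviously", "clearly", "disaster", "incredible"}

def article_signals(text: str) -> dict:
    """Compute toy versions of the signals described above (all heuristics assumed)."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    subjective = sum(w in SUBJECTIVE_WORDS for w in words)
    numbers = len(re.findall(r"\d+", text))  # crude proxy for factual density
    return {
        "subjectivity": subjective / max(len(words), 1),
        "fact_density": numbers / max(len(sentences), 1),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
    }

signals = article_signals("This is obviously a terrible policy. It cost $3 billion in 2020.")
print(signals["subjectivity"])  # fraction of subjective words in the article
```

A real system would replace these counts with trained NLP models, but the shape — many weak signals combined into one judgment — is the same.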
What do the results mean?
The system will return several possible results based on what it has determined about an article:
Suspect Source: While this article may or may not be true, this author or site has a very, very, high number of fake articles.
Likely Satire, Parody, or Sarcastic: The tone of this article seems to imply sarcasm or parody. As such, even if some of the sentiment is true, some of the statements may be false or exaggerated for comedic or satirical effect.
Click Bait: The article uses a headline that is designed to be bombastic or draw viewers. This often means it is misleading. The contents of the article may or may not be true, but it should be viewed with skepticism.
Opinionated/Biased: The author shows substantial bias towards swaying the reader to a particular view. While statements may be true individually, the result is often false by omission.
Author fails to be definitive: The author hedges statements such that they aren't definitive. This could be because the facts are unknown, or there is speculation. It can also be the result of the story being about future events that may not happen.
Limited supporting facts: There aren't enough facts to support the position of the article, or the article is not a factual report (it may be an editorial or a press release).
Based on these and other factors, the system assigns a trustworthiness rating:
Seems Legit: You can likely cite this as a source.
Consider a more reputable source: While there is no strong indication that this is not trustworthy, it doesn't appear to be a source you would want to cite. Likely there is a better source for any information contained in this article.
Seems Sketchy: Several red flags make this article untrustworthy.
Super Shady: There are quite a few red flags that make this article untrustworthy.
Fake News: This article contains so many red flags the system believes there is no chance the article is true.
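Conceptually, the rating step boils down to mapping accumulated red flags onto the scale above. The label names follow the list, but the idea of a simple count and these exact thresholds are hypothetical assumptions for illustration:

```python
def trust_label(red_flags: int) -> str:
    """Map a count of detected red flags to a trust label (thresholds are illustrative)."""
    if red_flags == 0:
        return "Seems Legit"
    if red_flags == 1:
        return "Consider a more reputable source"
    if red_flags == 2:
        return "Seems Sketchy"
    if red_flags <= 4:
        return "Super Shady"
    return "Fake News"

print(trust_label(0))  # → Seems Legit
print(trust_label(5))  # → Fake News
```

In practice the signals would likely be weighted rather than simply counted, but the output scale is the one shown above.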
New in 1.2:
Fewer false positives from related articles on the page.
Added Psychographic Bias data.
#FakeNews #AI #NLP #NLU #Truthiness