Information is now disseminated largely through social networks, where immediacy is essential: being the first to spread the news matters. However, this model, together with the crisis in the media, has brought drawbacks. Because the necessary time is not devoted to contrasting and verifying the information and data that reach us, incorrect information ends up being published.
Faced with the proliferation of fake news, the media are working to insert false-news detection systems into their production routines, with the aim of counteracting hoaxes and providing citizens with quality information. This is fact-checking: the process of verifying information, whether published news, a political speech or public statements, in order to determine its veracity and accuracy.
The most innovative outlets, like The New York Times, incorporate Artificial Intelligence (AI) to automate the process. Others have journalists who verify speeches by contrasting the information; an example is the EFE Agency and its EFE Verifica service. The two approaches can also be combined, pairing human verification with Artificial Intelligence, as in the Full Fact and Newtral projects.
The UK organization Full Fact is developing automated verification tools for use in newsrooms around the world. The project is divided into two branches: Trends and Live. The first registers each repetition of incorrect data in order to detect the misleading claim and identify who is behind it. Live, in turn, detects statements appearing in television captions that have already been verified, and automatically displays the most recent articles about them. It also flags statements that have not yet been contrasted so that they can be verified.
The EFE Agency's EFE Verifica service analyzes both political discourse and content that goes viral on social networks to check whether they fit the facts or the available data. In addition, it publishes information that explains and contextualizes events that generate confusion in public opinion.
The News Provenance Project
The New York Times is developing The News Provenance Project, which is based on blockchain: a chain of blocks that allows citizens to trace content, in this case images, back to its origin. The blockchain is a large database that contains the history of transactions executed on the network and is distributed among several participants, each storing an exact copy of the chain. As a result, it is almost impossible to alter.
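The tamper-evidence property described here can be illustrated with a minimal hash-linked chain. This is a toy sketch of the general blockchain idea, not the project's actual Hyperledger-based implementation; all names and fields in it are invented:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON serialization.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    # Each block carries its payload and the hash of its predecessor.
    return {"data": data, "prev_hash": prev_hash}

def verify_chain(chain):
    # Every block must reference the hash of the block before it;
    # altering any earlier block breaks every later link.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block({"photo_id": "img-001", "source": "newsroom"}, "0" * 64)
second = make_block({"photo_id": "img-001", "edit": "crop"}, block_hash(genesis))
chain = [genesis, second]
assert verify_chain(chain)

genesis["data"]["source"] = "unknown"   # tampering with history...
assert not verify_chain(chain)          # ...is detected immediately
```

Because each copy of the chain is held by several participants, a would-be forger would have to rewrite every subsequent block on a majority of copies, which is what makes alteration practically impossible.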
The Trust Project
Based on in-depth interviews, the project establishes a set of reliability indicators, which offer information about the medium, the journalist and the commitments hidden behind each story. In this way, it is easier for the reader to identify reliable news. In addition, it has external partners, such as Google, Facebook and Twitter, who take the indicators into account to position the content more favourably. In other words, it is a way to reduce the relevance of clickbait and counteract the ranking algorithms of digital platforms.
Newtral’s verification methodology consists of selecting statements by politicians from different parties and public administrations in newspapers, radio and television interviews, social networks and any public platform. It chooses statements that are of interest or relevance by a purely journalistic criterion, weighing the importance of the statement and its author, whether it is repeated as an argument intentionally created to confuse, and whether its content can be verified with data. Opinions that belong to ordinary political rhetoric are discarded.
Chequeado checks the statements of politicians, economists, business people, public figures, the media and viral content on social networks, classifying them as “true” or “false”. It was the first non-partisan site in Latin America with this purpose.
The LatamChequea Coronavirus project, coordinated by the Argentine organization Chequeado and financed by Google, checks information and collects and updates verifications in Spanish about the pandemic. It brings together the efforts of nearly 40 Latin American media outlets, as well as some from Spain and Portugal. In addition to a website where hoax checks on the subject are constantly updated, the initiative has created a downloadable board game.
The European Union has joined the fight by financing the FANDANGO project. Its goal is to aggregate and verify data from news, media sources, social media and open data in order to detect fake news and provide citizens with more reliable communication. Big data is stored on a platform based on Data Lake technology, which collects large amounts of raw data so that it can be analyzed with AI tools to reveal false or misleading news.
The web extension created by the Maldita.es fact-checker alerts users when they visit an unreliable page. The objective of this portal is to provide citizens with tools so that they are not deceived by false news. Its Maldita Hemeroteca, Maldito Bulo, Maldita Ciencia and Maldito Dato sections focus on controlling disinformation and public discourse through fact-checking techniques and data journalism. The main objectives of this non-profit association are to monitor political discourse, promote transparency in public and private institutions, verify and fight disinformation wherever it appears, and promote media literacy and technological tools so that a conscious community can defend itself from disinformation.
Where is the innovation in LieSense?
In this context, the LieSense project aims to develop a platform that helps experts detect disinformation content and campaigns, providing a powerful tool to analyze the patterns of disinformation propagation in social media. In addition, LieSense will address the research and practical issues mentioned in the proposal statement by providing a simulation platform for modeling the spread of disinformation. This will provide feedback for the monitoring platform and users, offering more information about this phenomenon.
LieSense’s overall objective is to improve the effectiveness and efficiency of the response to misinformation in the banking and financial domains. This will be achieved with a combination of custom-tailored crawling and visualization tools focused on the dissemination of particular types of disinformation and of trusted fact-checks. Fact-checking is the main tool against misinformation, but it has several limitations. To address them, this project will focus on the four stages of fact-checking: finding claims to check, getting data to check claims, checking claims, and monitoring and anticipating claims.
In order to achieve this overall goal, the LieSense project spans several fields of research that support fact-checking: information fusion, data visualization, natural language processing (NLP), social network analysis (SNA), and agent-based social simulation. Breaking these down, LieSense aims to use a combination of existing analysis techniques: measuring and analyzing the impact of specific information within the network, sentiment and subjectivity analysis, writing-style analysis and other state-of-the-art NLP techniques. It should be noted that while these analyses will be used as cues to filter relevant content, they are not a substitute for manual fact-checking but an aiding tool; providing true stand-alone fact-checking is beyond the scope of this project. Finally, the results obtained will be accessible from a chatbot interface to improve usability and the customization of the visualizations.
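As a rough illustration of how such cues might be used to triage content for human reviewers, here is a toy lexicon-based sketch. The word lists and threshold are invented for the example; the project envisages trained NLP models, not hand-written lexicons:

```python
# Toy cue lexicons; a real system would use trained sentiment,
# subjectivity and style models rather than fixed word lists.
SUBJECTIVE = {"outrageous", "shocking", "unbelievable", "disaster"}
NEGATIVE = {"collapse", "fraud", "scam", "panic"}

def cue_score(text):
    # Count how many subjectivity and negativity cues appear in the text.
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & SUBJECTIVE) + len(words & NEGATIVE)

def filter_for_review(posts, threshold=2):
    # Surface only the most suspicious posts to a human fact-checker.
    # The cues are a triage aid, not a verdict on truthfulness.
    return [p for p in posts if cue_score(p) >= threshold]

posts = [
    "Shocking fraud causes panic at major bank",
    "Central bank publishes quarterly report",
]
print(filter_for_review(posts))  # only the first post passes the threshold
```

The point of the sketch is the division of labour: automated cues rank and filter the stream, and the final verification remains manual, exactly as the paragraph above states.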
| Proposal | Credibility Score | Dissemination | Connections with other platforms | Trustworthiness |
|---|---|---|---|---|
| Full Fact | Not available to the public | Through social networks: Twitter, Facebook and Instagram | RSS/Atom and newsletter | News, media and other sources of information |
| EFE Verifica | Not available to the public | Through social networks: Twitter, Facebook and YouTube | RSS/Atom and newsletter | News, media and other sources of information |
| The News Provenance Project | Not available to the public | Through The New York Times' social media profiles | IBM's Hyperledger and PhotoMetadata | Media and metadata previously stored in its blockchain |
| The Trust Project | Uses 8 indicators, but not available to the public | Through social networks: Twitter, Facebook, Instagram and LinkedIn | RSS/Atom and newsletter | 8 trust indicators checked automatically: the journalist's expertise, the purpose of the story, the references and access to them, the use of local references, the diversity of the people involved in the story, reader participation in the comment section, how the story is written, and whether the journalist or news organization explains their ownership and standards |
| Newtral | Not available to the public | Through social networks: Twitter, Facebook, Instagram, YouTube and Telegram | RSS/Atom and newsletter; WhatsApp bot and Telegram bot | News, media and other sources of information |
| Chequeado & LatamChequea | Not available to the public | Through social networks: Twitter, Facebook, Instagram and WhatsApp | RSS/Atom and newsletter | News, media and other sources of information |
| Fandango | Not available to the public | Through social networks: Twitter, Facebook and GitHub | RSS/Atom and newsletter | Social networks, knowledge databases, public web, corporate data, news portals, data shippers and ground truth data |
| Maldita | Not available to the public | Through social networks: Twitter, Facebook, Instagram, YouTube, LinkedIn, TikTok and Telegram | RSS/Atom and newsletter | News, media, and direct contact with specific people to clarify the news, distinguishing between hoaxes, news without evidence, false notifications or disclosure |
| LieSense | Available, from 0 to 100; it can also be used as a measure of agreement | The user visualizes context and data in a personalized way through reports generated by the chatbot; content is not classified in advance, and the user decides what to evaluate | Chatbot | News, online media, social media and social context annotated with sentiment, emotion and other results from social network analysis |
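The 0-100 score in the last row could, for example, be an aggregate of several normalized signals. The sketch below is purely illustrative: the signal names and weights are assumptions made for the example, not LieSense's documented scoring model:

```python
def credibility_score(signals, weights=None):
    """Combine per-signal values in [0, 1] into a single 0-100 score.

    `signals` maps hypothetical signal names (e.g. source reputation,
    agreement with trusted fact-checks) to normalized values; equal
    weights are used unless explicit ones are given.
    """
    weights = weights or {name: 1.0 for name in signals}
    total = sum(weights.values())
    weighted = sum(signals[name] * weights[name] for name in signals)
    return round(100 * weighted / total)

# Equal weights: (1.0 + 0.5) / 2 = 0.75 -> 75
print(credibility_score({"source_reputation": 1.0, "factcheck_agreement": 0.5}))
```

A continuous score of this kind also supports the "measure of agreement" reading in the table: two analysts (or an analyst and the platform) can compare scores rather than exchange binary true/false verdicts.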