DALEK: combining deep active learning and explanations methods for fake news detection on COVID-19

Comito C.; Guarascio M.; Liguori A.; Pisani F. S.
2026

Abstract

Misinformation poses a significant challenge in social media, particularly concerning health-related topics such as the COVID-19 pandemic. The spread of unverified information can sway public opinion and influence behavior, leading to potentially harmful consequences. For instance, misinformation campaigns advocating against vaccination, often based on partial or misinterpreted data, have succeeded in dissuading individuals from getting vaccinated, thereby increasing their susceptibility to health risks. Artificial Intelligence methods, including Language Models, have emerged as valuable tools for identifying and mitigating the impact of such malicious information. However, the effectiveness of these detectors (especially those based on neural architectures) can be hindered by the limited availability of labeled training examples. Furthermore, the expertise of domain experts is crucial for verifying the accuracy of information in this context. This research proposes a Neural Active Learning framework with explanation capabilities to fight fake news, including misinformation related to COVID-19. Active Learning enriches the training set by strategically selecting the most informative instances to submit to the expert. Explanation methods serve a dual purpose: aiding operators in the labeling process and guiding the selection of informative instances for Active Learning. Specifically, Explanatory Active Learning is leveraged for the latter objective. Experimental evaluations conducted on real datasets from the health domain, focusing on COVID-19 misinformation, demonstrate the effectiveness of the proposed solution in detecting and mitigating the spread of fake news.
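The abstract does not detail the selection criterion; a common baseline for the "most informative instances" step is uncertainty sampling, where the current model's least confident predictions are sent to the expert. The sketch below illustrates that baseline only (the `select_most_informative` helper and the toy probabilities are assumptions for illustration, not DALEK's explanation-guided criterion):

```python
import numpy as np

def select_most_informative(probs: np.ndarray, k: int = 5) -> np.ndarray:
    """Rank unlabeled instances by predictive entropy and return the
    indices of the k most uncertain ones (uncertainty sampling)."""
    eps = 1e-12  # avoid log(0) for confident predictions
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(entropy)[::-1][:k]  # highest entropy first

# Toy example: fake/real class probabilities for 4 unlabeled posts
probs = np.array([
    [0.99, 0.01],  # confident -> uninformative
    [0.55, 0.45],  # near the decision boundary -> informative
    [0.90, 0.10],
    [0.50, 0.50],  # maximally uncertain
])
print(select_most_informative(probs, k=2))  # → [3 1]
```

In an active-learning loop, the selected instances would be labeled by the expert, added to the training set, and the detector retrained before the next selection round.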
Istituto di Calcolo e Reti ad Alte Prestazioni - ICAR
Deep active learning
EXplainable Artificial Intelligence (XAI)
Fake news detection
eXplanatory Interactive Learning (XIL)
Misinformation
Files in this product:
File: 2026_nca.pdf (authorized users only)
Description: published version
Type: Publisher's version (PDF)
License: NON-PUBLIC - Private/restricted access
Size: 2.72 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/573661
Citations
  • Scopus: 0