Can AI counteract Health-related Fake News?

HeReFaNMi (Health-Related Fake News Mitigation): an open-source project to counteract health-related misinformation

14:40, 10/11/2023

HeReFaNMi (Health-Related Fake News Mitigation) is an NGI-Search-funded project that aims to restore trustworthiness to the Internet community by tackling the spread of fake news. Beyond the well-known cyber threats, several factors have recently been undermining the Internet search experience. One of the lessons learned from the pandemic concerns the health-related fake news spread across websites and social media networks. Among its harmful effects was a non-negligible hesitancy towards the guidelines of national healthcare systems. Since then, several AI-powered solutions have been developed to counteract the circulation of fake news using supervised and unsupervised learning. The task is challenging because such systems require continuous updating as new scientific findings are introduced. The so-called data drift and catastrophic forgetting also limit the effectiveness of AI-powered classification methods.
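
As a purely illustrative sketch (not HeReFaNMi's actual pipeline), a supervised fake-news classifier can be as simple as TF-IDF features fed to a linear model; the tiny labelled dataset below is hypothetical.

```python
# Minimal supervised text-classification sketch (illustrative only; not the project's pipeline).
# Assumes a labelled corpus of health-related claims: 1 = misinformation, 0 = reliable.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data; a real system would use a large, curated, regularly updated corpus.
texts = [
    "Drinking bleach cures viral infections",              # misinformation
    "Vaccines alter your DNA permanently",                 # misinformation
    "Hand washing reduces the spread of infections",       # reliable
    "Vaccines are tested in randomized clinical trials",   # reliable
]
labels = [1, 1, 0, 0]

# TF-IDF features + logistic regression: a common supervised baseline for fake-news detection.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Garlic kills the coronavirus in 5 minutes"]))
```

Keeping such a baseline accurate over time is exactly where data drift and catastrophic forgetting bite: the model must be retrained or incrementally updated as scientific consensus evolves.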
LLMs (Large Language Models) have recently made their way into the AI landscape by delivering unprecedented performance in text analytics, text mining, question-answering systems, and text generation. However, LLMs suffer from hallucination, meaning they can produce content that is unreliable as a source of truth even when fine-tuned on scientifically sound datasets.
