The spread of misinformation, especially in the health sector, has reached pandemic proportions. As one scholar aptly noted, “We are in a pandemic of misinformation and, honestly, there is no one vaccine for it.”[i] The rise of fake health news presents significant risks to both patients and medical professionals. Here we explore the challenges posed by fake health news and potential solutions to mitigate its impact.
The Growing Risk of Fake News
In September 2023, the FBI identified deepfakes as a top risk. The World Economic Forum also ranks misinformation and disinformation as highly impactful global threats over the next two years. A study by Europol predicts that by 2026, 90% of online content could be synthetically generated, posing significant challenges for law enforcement.[ii]
AI and Fake News Detection
AI technologies have made fake news increasingly difficult for mere mortals to detect. Tools like Large Language Models (LLMs) and Generative Adversarial Networks (GANs) are double-edged swords. While LLMs can generate human-like responses, they can also produce misleading information. GANs, designed to improve detection, also enhance the ability to create more convincing deepfakes.[iii]
Impact on Health Sector
The COVID-19 pandemic led to a surge in fake health news, including dangerous miracle cures and vaccine conspiracies. This misinformation continues to proliferate, potentially burying the truth under a blizzard of false claims. For instance, manipulated medical radiology images have been shown to trick both AI models and medical professionals into making incorrect diagnoses, posing severe risks to patient care.[iv]
Regulatory Approaches
Determining liability for harm caused by fake health news is complex. In the EU, efforts are being made to address these challenges through comprehensive AI regulation and social media platform liability. For instance, the EU’s AI Act categorizes AI systems by risk and imposes strict regulations to ensure safety and transparency, particularly for high-risk applications like healthcare. In contrast, the US lacks comprehensive federal AI laws, relying instead on state-level regulations and sector-specific policies like HIPAA, which are inconsistent and, in HIPAA’s case, simply antiquated.
The EU’s General Data Protection Regulation (GDPR) sets a gold standard for privacy regulation, with the EU AI Act building on these protections. The AI Liability Directive further aids claimants by creating a rebuttable presumption of causality, making it easier to hold AI system providers accountable for harm. In the US, the lack of comprehensive federal privacy laws means that regulations are inconsistent across states. California’s Privacy Rights Act (CPRA) is the most robust state-level legislation, but it only applies to businesses meeting specific financial criteria.
Social Media Platform Liability
Social media platforms play a crucial role in the dissemination of fake health news. The EU is to be applauded for effectively ending social media’s ability to self-regulate through the passage of the EU Digital Services Act and now the new European Media Freedom Act (EMFA). These laws stand in stark contrast to Section 230 of the US Communications Decency Act, which provides broad immunity to these platforms, allowing them to moderate content without fear of liability. However, this also means that platforms are not held accountable for failing to remove harmful content. The debate continues in the US over whether there should be public health exemptions to such immunity, especially when platforms refuse to remove fake health news.
Potential Solutions
Several solutions have been proposed to combat the spread of fake health news:
- Restricting LLMs to Medical Databases: Using AI systems trained exclusively on reliable medical databases can help ensure that the information generated is accurate and trustworthy. This approach reduces the risk of AI producing misleading or harmful content.
- Learning Prompt Engineering: Training professionals to ask AI the right questions is crucial for obtaining reliable answers. Prompt engineering can help users identify potential inaccuracies and ensure that AI outputs are based on verified data.
- Public Health Exemptions: Considering exemptions to liability protections for platforms that fail to remove harmful fake health news can incentivize social media companies to be more proactive in moderating content. However, this must be balanced with the need to protect freedom of speech and prevent overreach.
- Self-Policing: Encouraging platforms to remove dangerous content, as seen with Amazon’s removal of books promoting harmful cures, is an effective strategy. Platforms have a responsibility to ensure that the information they disseminate does not endanger public health.
- Technological Solutions: Utilizing AI to detect deepfakes and other forms of fake news dynamically is essential. As the methods for creating fake news evolve, detection technologies must keep pace to effectively identify and mitigate misinformation.
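To make the first two solutions above concrete, the sketch below shows one way an AI assistant could be restricted to a vetted medical corpus: it answers only from trusted documents and declines when no relevant source is found. This is a minimal illustration under stated assumptions, not a production system; the `VETTED_CORPUS` entries, the keyword-overlap retriever, and the `answer_from_corpus` function are hypothetical stand-ins for a real retrieval pipeline over curated medical databases.

```python
import re

# Hypothetical vetted corpus: in practice this would be a curated
# medical database, not an in-memory dictionary.
VETTED_CORPUS = {
    "measles-vaccine": "The MMR vaccine is safe and highly effective "
                       "against measles, mumps, and rubella.",
    "vitamin-c-cold": "Vitamin C does not cure the common cold, though "
                      "it may modestly shorten symptom duration.",
}

STOPWORDS = {"the", "a", "is", "and", "of", "it", "to", "does", "do", "can"}

def keywords(text):
    """Lowercased content words, with punctuation and stopwords removed."""
    return set(re.findall(r"[a-z]+", text.lower())) - STOPWORDS

def retrieve(question, corpus):
    """Score each document by keyword overlap; keep only nonzero matches."""
    q = keywords(question)
    scored = [(len(q & keywords(text)), doc_id, text)
              for doc_id, text in corpus.items()]
    return sorted((s for s in scored if s[0] > 0), reverse=True)

def answer_from_corpus(question, corpus):
    """Answer only from vetted sources; decline rather than speculate."""
    hits = retrieve(question, corpus)
    if not hits:
        return "No vetted source found; please consult a clinician."
    _, doc_id, text = hits[0]
    return f"{text} [source: {doc_id}]"
```

For example, a question about crystal healing finds no matching vetted source and is declined, while a question about the measles vaccine returns the corpus text with its source identifier attached, so the user can trace the answer back to verified data.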
After writing the above paper, I uploaded it to ChatGPT-4o and asked it to check the paper for accuracy and to offer any additional recommendations, with citations. While approving of the recommendations I personally crafted, the LLM offered the following additional advice:
Additional Recommendations
- Enhance Media Literacy Education: Increasing public awareness and education about how to identify fake news can empower individuals to critically evaluate the information they encounter. Studies have shown that media literacy programs can reduce the susceptibility to misinformation (source: European Journal of Communication).
- Strengthen International Collaboration: Given the global nature of fake news, international cooperation is crucial. Sharing best practices, research, and technological advancements can help countries combat misinformation more effectively. The World Health Organization (WHO) has emphasized the importance of global collaboration in addressing health misinformation (source: WHO).
- Develop and Implement Comprehensive AI Ethics Guidelines: Establishing clear ethical guidelines for AI development and deployment can help ensure that AI technologies are used responsibly. These guidelines should address issues such as transparency, accountability, and the prevention of harm (source: IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems).
- Increase Research Funding for Misinformation Studies: Allocating more resources to research on misinformation can lead to better understanding and more effective strategies for combating fake news. Funding interdisciplinary studies that combine insights from psychology, computer science, and communication can provide a holistic approach to tackling misinformation (source: National Science Foundation).
[i] Melani, Debra, “Fake News: Medical Quackery Enters a New Dimension: Spread of Misinformation Evolves, Heightening Threats to Patients and Medical Science,” March 27, 2023 (quoting Aimee Pugh Bernard, PhD). https://news.cuanschutz.edu/news-stories/fake-news-medical-quackery-enters-a-new-dimension
[ii] de’Medici, Bruce, “Deepfakes and Malpractice Risk: Lawyers Beware,” American Bar Association, May 2024. https://www.americanbar.org/groups/science_technology/publications/scitech_lawyer/2024/spring/deepfakes-malpractice-risk/
[iii] Jiang, Xiaoyan, et al., “Deep Learning for Medical Image-Based Cancer Diagnosis,” Cancers (Basel). 2023 Jul;15(14):3608. Published online July 13, 2023. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10377683/
Our advice:
The recommendations, including the additional suggestions provided by ChatGPT, form a comprehensive strategy for addressing the fake health news pandemic. By leveraging technology, enhancing education, promoting international cooperation, and developing ethical guidelines, we can mitigate the impact of misinformation on public health. Implementing these strategies requires collaboration between governments, tech companies, healthcare professionals, and the public.
You can view the PowerPoint that matches this article on Janice F. Mulligan’s website at https://www.janmulligan.com/wp-content/uploads/2024/06/JFM-June-19-Final-Fake-news-power-point.pdf