Exploring Adversarial Attacks in Federated Learning for Medical Imaging

Erfan Darzi*, Florian Dubost, Nanna M. Sijtsema, P. M.A. Van Ooijen

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

5 Citations (Scopus)
24 Downloads (Pure)

Abstract

Federated learning provides a privacy-preserving framework for medical image analysis but is also vulnerable to a unique category of adversarial attacks. This article presents an in-depth exploration of these vulnerabilities, emphasizing the potential for adversaries to exploit attack transferability, a phenomenon in which adversarial examples crafted against one model can be successfully applied to other models within the federated network. We examine the specific risks posed by such attacks in the context of medical imaging, using domain-specific MRI tumor and pathology datasets. Our comprehensive evaluation assesses the efficacy of various known threat scenarios within a federated learning environment. The study demonstrates the system's susceptibility to multiple forms of attack and highlights how domain-specific configurations can significantly elevate attack success rates. This analysis underscores the need for dedicated defense mechanisms and advocates a reevaluation of current security protocols in federated medical image analysis systems.
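The transferability phenomenon described above can be illustrated with a minimal sketch. This is not the paper's method or models; it is a hypothetical example using the standard Fast Gradient Sign Method (FGSM) on two toy linear classifiers standing in for two clients of a federation. The perturbation is computed only against the attacker's local surrogate (model A), yet it also flips the prediction of the unseen victim (model B), because models trained on similar data tend to share decision-boundary geometry.

```python
import numpy as np

def predict(w, b, x):
    """Linear classifier: label 1 if w.x + b > 0, else 0."""
    return int(w @ x + b > 0)

def fgsm(w, b, x, y, eps):
    """Fast Gradient Sign Method for logistic loss.

    The gradient of the logistic loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; FGSM steps in its sign direction.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Two hypothetical client models trained on similar data (illustrative weights)
w_a, b_a = np.array([2.0, -1.0]), 0.0   # surrogate: attacker's local model
w_b, b_b = np.array([1.8, -1.2]), 0.0   # victim: another model in the federation

x, y = np.array([1.0, 1.0]), 1          # clean input, true label 1
x_adv = fgsm(w_a, b_a, x, y, eps=0.5)   # crafted only against model A

print(predict(w_a, b_a, x), predict(w_b, b_b, x))          # clean: both predict 1
print(predict(w_a, b_a, x_adv), predict(w_b, b_b, x_adv))  # adversarial: both fooled (0)
```

In a real federated medical-imaging deployment the surrogate would be the attacker's local deep network rather than a linear model, but the mechanism is the same: no access to the victim's parameters is required for the attack to transfer.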

Original language: English
Pages (from-to): 13591-13599
Number of pages: 9
Journal: IEEE Transactions on Industrial Informatics
Volume: 20
Issue number: 12
Early online date: 28 Aug 2024
DOIs
Publication status: Published - Dec 2024

Keywords

  • Adversarial attacks
  • deep learning
  • federated learning
  • medical imaging
