Abstract
Federated learning provides a privacy-preserving framework for medical image analysis but is vulnerable to a distinct category of adversarial attacks. This article presents an in-depth exploration of these vulnerabilities, emphasizing the potential for adversaries to exploit attack transferability, a phenomenon in which adversarial attacks developed against one model succeed against other models in the federated network. We examine the specific risks such attacks pose in medical imaging, using domain-specific MRI tumor and pathology datasets. Our comprehensive evaluation assesses the efficacy of several known threat scenarios within a federated learning environment. The study demonstrates the system's susceptibility to multiple forms of attack and highlights how domain-specific configurations can significantly raise the success rate of these attacks. This analysis underscores the need for defense mechanisms and advocates a reevaluation of the current security protocols in federated medical image analysis systems.
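The transferability phenomenon the abstract describes can be illustrated with a minimal sketch. This is not the paper's method: it uses toy linear (logistic) models with hypothetical hand-picked weights standing in for two clients' models, and an FGSM-style perturbation (sign of the input gradient) crafted against a surrogate model. Because the two models have correlated parameters, as federated clients trained on similar data tend to, the perturbation also degrades the target model's confidence.

```python
import numpy as np

# Hypothetical illustration of attack transferability: an FGSM-style
# perturbation crafted against a surrogate linear model also lowers the
# confidence of a separately parameterized target model. All weights
# below are toy values, not taken from the paper.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    # Gradient of the binary cross-entropy loss w.r.t. the input x of a
    # logistic model sigmoid(w @ x + b); perturb in its sign direction.
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

w_surrogate = np.array([1.0, -2.0, 0.5])   # attacker's local model
w_target = np.array([0.9, -1.8, 0.6])      # another client's similar model
b = 0.0

x = np.array([0.2, -0.5, 1.0])  # a clean input
y = 1.0                         # its true label

x_adv = fgsm(x, y, w_surrogate, b, eps=0.5)

p_clean = sigmoid(w_target @ x + b)
p_adv = sigmoid(w_target @ x_adv + b)
print(p_clean > p_adv)  # True: the surrogate-crafted attack transfers
```

In a real federated medical-imaging setting the models would be deep networks and the inputs MRI or pathology images, but the mechanism is the same: gradients computed on one participant's model yield perturbations that remain effective against the others.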
Original language | English |
---|---|
Pages (from-to) | 13591-13599 |
Number of pages | 9 |
Journal | IEEE Transactions on Industrial Informatics |
Volume | 20 |
Issue number | 12 |
Early online date | 28-Aug-2024 |
DOIs | |
Publication status | Published - Dec-2024 |
Keywords
- adversarial attacks
- deep learning
- federated learning
- medical imaging