Similarity-Guided Adaptive Local Differential Privacy for Robust Federated Learning in Healthcare

Mohammed El Amine Beyat, Ahmed Korichi, Mohammed Kamel Benkaddour, Mohammed El Aymene Beyat

Abstract

Federated Learning (FL) offers a transformative approach for healthcare AI by allowing medical institutions to collaboratively train global models without sharing sensitive patient data. However, standard privacy-preserving techniques like Local Differential Privacy (LDP) typically apply uniform noise across all participants, penalizing every client regardless of its data quality or the reliability of its contributions. This rigid application significantly degrades model performance under the heterogeneous (non-IID) data distributions common in medical settings and fails to mitigate malicious or anomalous updates. In this work, we propose the Similarity-Guided Adaptive Local Differential Privacy (SGA-LDP) framework. Using cosine similarity as a behavior-aware heuristic for client reliability, the server dynamically assigns relaxed privacy budgets to well-aligned updates to preserve model utility, while enforcing stricter noise levels on deviating updates to enhance both privacy protection and robustness. We evaluate the proposed framework on the BloodMNIST dataset using a pretrained EfficientNet-B0 backbone. Experimental results demonstrate that SGA-LDP improves global model accuracy to 84.1 ± 0.6 % and achieves an F1-Score of 0.83, compared to an accuracy of 74.5 ± 1.1 % under static LDP. Furthermore, the framework maintains strong privacy protection with a Membership Inference Attack (MIA) AUC of 0.54 and achieves high robustness against targeted label-flipping attacks with an Attack Success Rate (ASR) of 0.11. These findings indicate that similarity-guided adaptive noise allocation effectively optimizes the trilemma balance between accuracy, privacy, and robustness in sensitive healthcare AI environments.
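The core idea described above, mapping each client's cosine similarity to a per-client privacy budget and calibrating Gaussian noise accordingly, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the reference direction (mean of client updates), the linear similarity-to-budget mapping, and the `eps_min`/`eps_max`/`clip`/`delta` values are all assumptions chosen for clarity.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened update vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def assign_budgets(updates, eps_min=0.5, eps_max=8.0):
    """Map each client's alignment with the mean update to a privacy budget.

    Well-aligned updates (similarity near +1) receive a larger epsilon,
    i.e. less noise; deviating updates receive a smaller epsilon.
    The mean-of-updates reference and the linear mapping are illustrative choices.
    """
    ref = np.mean(updates, axis=0)
    sims = np.array([cosine_similarity(u, ref) for u in updates])
    weights = (sims + 1.0) / 2.0          # rescale [-1, 1] -> [0, 1]
    return eps_min + weights * (eps_max - eps_min)

def add_gaussian_noise(update, eps, delta=1e-5, clip=1.0, rng=None):
    """Clip the update to bound sensitivity, then add Gaussian noise
    calibrated to (eps, delta) via the standard Gaussian-mechanism scale."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / (norm + 1e-12))
    sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return clipped + rng.normal(0.0, sigma, size=update.shape)
```

In a round, the server would score each received update with `assign_budgets` and perturb it with `add_gaussian_noise` before aggregation, so that an outlier (e.g. a label-flipped client) is both down-weighted by heavier noise and kept under a tighter privacy guarantee.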
