Sci Rep. 2025 Apr 16;15(1):13061. doi: 10.1038/s41598-025-95858-2.
ABSTRACT
In the digital age, privacy preservation is of paramount importance when processing sensitive health-related information. This paper explores the integration of Federated Learning (FL) and Differential Privacy (DP) for breast cancer detection, leveraging FL's decentralized architecture to enable collaborative model training across healthcare organizations without exposing raw patient data. To strengthen privacy, DP injects statistical noise into the model updates shared during training, mitigating adversarial attacks and preventing data leakage. The proposed work uses the Breast Cancer Wisconsin Diagnostic dataset to address critical challenges such as data heterogeneity, the privacy-accuracy trade-off, and computational overhead. Experimental results show that FL combined with DP achieves 96.1% accuracy with a privacy budget of ε = 1.9, ensuring strong privacy preservation with minimal performance trade-offs. In comparison, the traditional non-FL model achieved 96.0% accuracy, but at the cost of centralized data storage, which poses significant privacy risks. These findings validate the feasibility of privacy-preserving artificial intelligence models in real-world clinical applications, effectively balancing data protection with reliable medical predictions.
PMID:40240790 | PMC:PMC12003885 | DOI:10.1038/s41598-025-95858-2
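To make the abstract's core mechanism concrete, the sketch below shows one plausible way DP noise can be injected into FL model updates before aggregation. The paper does not specify its mechanism; this is a minimal illustration assuming a Gaussian mechanism with per-client L2 clipping and simple federated averaging, and all names and parameter values (`clip_norm`, `noise_mult`) are hypothetical, not taken from the study.

```python
# Minimal sketch (not the authors' code): differentially private
# federated averaging over model-weight vectors. Assumes a Gaussian
# mechanism with per-client L2 clipping; `clip_norm` and `noise_mult`
# are illustrative parameters, not values reported in the paper.
import numpy as np

def dp_client_update(global_w, local_w, clip_norm, noise_mult, rng):
    """Clip one client's update to bound sensitivity, then add noise."""
    update = local_w - global_w
    norm = np.linalg.norm(update)
    update = update * min(1.0, clip_norm / max(norm, 1e-12))  # L2 clip
    noise = rng.normal(0.0, noise_mult * clip_norm, size=update.shape)
    return update + noise

def federated_round(global_w, client_weights, clip_norm=1.0,
                    noise_mult=0.5, seed=0):
    """One FedAvg round: average the noisy, clipped client updates."""
    rng = np.random.default_rng(seed)
    updates = [dp_client_update(global_w, w, clip_norm, noise_mult, rng)
               for w in client_weights]
    return global_w + np.mean(updates, axis=0)

# Example: three simulated hospitals contribute locally trained weights.
if __name__ == "__main__":
    global_w = np.zeros(5)
    clients = [global_w + np.random.default_rng(i).normal(size=5)
               for i in range(3)]
    print(federated_round(global_w, clients))
```

In such a setup, the noise multiplier and the number of training rounds together determine the privacy budget ε (here reported as 1.9); larger noise tightens privacy at some cost to accuracy, which is the trade-off the abstract quantifies.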