Authors: Syeda Sadia Tabassum, P. Malar
Copyright: ©2025 | Pages: 38
DOI: 10.71443/9789349552487-06
Received: 21/11/2024 Accepted: 31/01/2025 Published: 12/03/2025
Federated Learning (FL) has emerged as a promising solution for privacy-preserving collaborative machine learning in decentralized networks, particularly within healthcare systems. This book chapter explores the integration of FL in urban health monitoring, emphasizing the critical role of privacy-preserving techniques in mitigating risks associated with sensitive health data. With the increasing adoption of decentralized healthcare networks, privacy concerns related to data sharing and model updates have become paramount. This chapter addresses key privacy threats in Federated Learning, such as adversarial attacks, data leakage, and malicious model manipulation, and proposes robust mitigation strategies. The application of differential privacy, secure aggregation protocols, and anonymization techniques is discussed, alongside the challenges these techniques pose for maintaining model accuracy and performance. Additionally, the chapter highlights the trade-off between privacy preservation and computational overhead, underscoring the need for efficient solutions that balance both. Through a comprehensive analysis, this chapter offers insights into the future of Federated Learning in healthcare, advocating for stronger privacy guarantees, secure collaboration, and the advancement of machine learning models to enable effective urban health monitoring.
Federated Learning (FL) has become a pivotal framework in modern machine learning, particularly in healthcare, due to its ability to preserve data privacy while enabling collaborative model development [1]. As healthcare systems transition to decentralized models, FL provides a solution that allows different institutions, devices, and users to collaboratively train models without the need to share sensitive data [2]. This is crucial in healthcare, where patient data privacy is a legal and ethical necessity [3]. The rise of urban health monitoring systems, which collect and analyze health data from multiple urban sources such as hospitals, wearable devices, and public health organizations, highlights the need for privacy-preserving technologies like FL [4]. While FL offers significant privacy benefits, it also brings challenges regarding the security of data during model training and the potential for adversarial attacks [5].
One of the fundamental challenges associated with Federated Learning in healthcare is ensuring the confidentiality of sensitive patient information [6]. In a typical centralized machine learning setting, data is collected and stored in a single location, making it vulnerable to data breaches [7]. FL eliminates the need for such centralized data storage, allowing data to remain on local devices or servers while only model updates are shared [8]. This approach significantly reduces the risk of exposing sensitive information [9]. Even with local data storage, however, there are still risks associated with the exchange of model parameters, which could potentially reveal private insights about individual patients [10]. To address this, advanced privacy-preserving techniques, such as differential privacy and secure aggregation protocols, must be employed to ensure that the privacy of health data is maintained throughout the learning process [11].
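The workflow described above can be sketched in code. The following is a minimal, illustrative simulation of one FL setup, not the chapter's specific method: each simulated client (e.g., a hospital) trains a logistic-regression model locally on data that never leaves the client, and the server aggregates only the resulting weight vectors, clipping each update and adding Gaussian noise in the spirit of differential privacy. All function names, the clipping threshold, and the noise scale are hypothetical choices for this sketch; a real deployment would calibrate noise to a formal privacy budget and use a cryptographic secure-aggregation protocol.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent for
    logistic regression. Raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # cross-entropy gradient
        w -= lr * grad
    return w

def dp_aggregate(client_weights, clip=1.0, noise_std=0.01, rng=None):
    """Server-side aggregation: clip each client's weight vector to a
    maximum L2 norm, average, then add Gaussian noise so that no single
    client's contribution can be precisely reconstructed (a simplified
    stand-in for differential privacy)."""
    rng = rng or np.random.default_rng(0)
    clipped = [w * min(1.0, clip / (np.linalg.norm(w) + 1e-12))
               for w in client_weights]
    avg = np.mean(clipped, axis=0)
    return avg + rng.normal(0.0, noise_std, size=avg.shape)

# Simulated round: three "hospitals", each with its own private data.
rng = np.random.default_rng(42)
clients = [(rng.normal(size=(50, 4)),
            rng.integers(0, 2, 50).astype(float)) for _ in range(3)]
global_w = np.zeros(4)
for _ in range(10):                            # 10 communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = dp_aggregate(updates, rng=rng)
```

Note the trade-off the chapter highlights: larger `noise_std` strengthens privacy but degrades the accuracy of the aggregated model, so the noise scale must be tuned against utility requirements.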