Tackling Non-IID Data and Data Poisoning in Federated Learning Using Adversarial Synthetic Data

Open Access | Sep 2024

Abstract

Federated learning (FL) enables multiple devices to jointly train a model while keeping their data private. However, it faces the challenge of heterogeneous data residing on the participating devices. This issue can be further complicated by the presence of malicious clients that aim to sabotage training by poisoning their local data. In this setting, the problem of distinguishing poisoned data from non-independently-and-identically-distributed (non-IID) data arises. To address it, a technique based on data-free synthetic data generation is proposed, applying the concept of an adversarial attack in reverse. The resulting adversarial inputs improve the training process by measuring the coherence of clients and favoring trustworthy participants. Experimental results from image classification tasks on the MNIST, EMNIST, and CIFAR-10 datasets are reported and analyzed.
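A minimal PyTorch sketch of the general idea follows, under one plausible reading of the abstract: synthetic inputs are optimized from noise so that the global model classifies them confidently (the reverse of a classic adversarial attack, which perturbs real inputs to change the prediction), and each client is then scored by its agreement on these inputs. All function names, hyperparameters, and the agreement metric here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def generate_synthetic_inputs(global_model, target_class, n_samples=32,
                              input_shape=(1, 28, 28), steps=200, lr=0.1):
    """Data-free synthetic input generation (assumed interpretation):
    starting from random noise, optimize the inputs themselves so the
    global model confidently predicts target_class -- the reverse of an
    adversarial attack, which perturbs real inputs to flip a prediction."""
    global_model.eval()
    for p in global_model.parameters():  # freeze weights; only inputs are optimized
        p.requires_grad_(False)
    x = torch.randn(n_samples, *input_shape, requires_grad=True)
    labels = torch.full((n_samples,), target_class, dtype=torch.long)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(global_model(x), labels)  # push predictions toward target_class
        loss.backward()
        optimizer.step()
    return x.detach()

def client_coherence(client_model, synthetic_x, target_class):
    """Hypothetical coherence score: the fraction of synthetic inputs on
    which the client model agrees with the label they were optimized for.
    A poisoned client is expected to diverge more than a merely non-IID one."""
    client_model.eval()
    with torch.no_grad():
        preds = client_model(synthetic_x).argmax(dim=1)
    return (preds == target_class).float().mean().item()
```

In an FL round, such coherence scores could, for instance, down-weight low-agreement clients during aggregation; the paper itself should be consulted for the actual weighting scheme.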

DOI: https://doi.org/10.14313/jamris/3-2024/17 | Journal eISSN: 2080-2145 | Journal ISSN: 1897-8649
Language: English
Page range: 1–13
Submitted on: Dec 27, 2023
Accepted on: Mar 11, 2024
Published on: Sep 12, 2024
Published by: Łukasiewicz Research Network – Industrial Research Institute for Automation and Measurements PIAP
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2024 Anastasiya Danilenka, published by Łukasiewicz Research Network – Industrial Research Institute for Automation and Measurements PIAP
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.