Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning
Metadata
Author
Rodríguez Barroso, Nuria; Martínez Cámara, Eugenio; Luzón García, María Victoria; Herrera Triguero, Francisco
Publisher
Elsevier
Subject
Federated learning; Deep learning; Adversarial attacks; Byzantine attacks; Dynamic aggregation operator
Date
2022-02-24
Bibliographic reference
Published version: Nuria Rodríguez-Barroso et al. Dynamic defense against byzantine poisoning attacks in federated learning, Future Generation Computer Systems, Volume 133, 2022, Pages 1-9, ISSN 0167-739X, [https://doi.org/10.1016/j.future.2022.03.003]
Sponsorship
R&D&I grants - MCIN/AEI, Spain: PID-2020-119478GB-I00; PID2020-116118GA-I00; EQC2018-005-084-P; ERDF A way of making Europe; MCIN/AEI FPU18/04475; IJC2018-036092-I
Abstract
Federated learning, a distributed learning paradigm that trains models on local devices without accessing the training data,
is vulnerable to Byzantine poisoning adversarial attacks. We argue that the federated learning model has to withstand such
adversarial attacks by filtering out the adversarial clients through the federated aggregation operator. We propose a
dynamic federated aggregation operator that discards adversarial clients on the fly and thereby prevents the corruption of
the global learning model. We assess it as a defense against adversarial attacks by deploying a deep learning classification model in
a federated learning setting on the Fed-EMNIST Digits, Fashion MNIST and CIFAR-10 image datasets. The results show that the
dynamic selection of the clients to aggregate enhances the performance of the global learning model and discards both the adversarial
clients and the poor clients (those with low-quality models).
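The filtering idea described in the abstract can be sketched as follows. This is an illustrative implementation, not the paper's exact operator: it assumes each client update comes with a quality score (e.g. validation accuracy of the local model on held-out data), and it keeps only clients whose score clears a cutoff relative to the mean before averaging, so the retained set changes dynamically each round.

```python
import numpy as np

def dynamic_aggregate(client_updates, client_scores, threshold=0.5):
    """Average client model updates, discarding suspected adversarial
    or poor-quality clients.

    client_updates: list of 1-D numpy arrays (flattened model weights)
    client_scores:  one quality score per client (higher is better),
                    e.g. local-model accuracy on a validation set
    threshold:      a client is kept if its score is at least
                    `threshold` times the mean score of all clients

    Returns the averaged update of the retained clients and the
    indices of the clients that were kept.
    """
    scores = np.asarray(client_scores, dtype=float)
    cutoff = threshold * scores.mean()
    kept = np.flatnonzero(scores >= cutoff)   # dynamic client selection
    if kept.size == 0:                        # degenerate case: keep all
        kept = np.arange(len(client_updates))
    stacked = np.stack([client_updates[i] for i in kept])
    return stacked.mean(axis=0), kept

# Three honest clients and one client whose poisoned model scores poorly:
updates = [np.array([1.0, 1.0]), np.array([1.2, 0.8]),
           np.array([0.9, 1.1]), np.array([-10.0, 10.0])]
scores = [0.90, 0.85, 0.88, 0.05]
agg, kept = dynamic_aggregate(updates, scores)
# The low-scoring client (index 3) is excluded from the aggregation.
```

Because the cutoff is computed from the current round's scores, the set of aggregated clients adapts each round rather than being fixed in advance, which is the essence of the dynamic selection the abstract describes.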