MITIGATING POISONING ATTACKS TO FEDERATED LEARNING IN IoTs ANOMALY DETECTION WITH ATTENTION AGGREGATION
DOI: https://doi.org/10.56651/lqdtu.jst.v13.n02.925.ict

Keywords: Anomaly detection, IoT, Attention Aggregation, Federated Learning

Abstract
Federated Learning (FL) is a privacy-preserving approach for training deep neural networks across decentralized devices without sharing raw data. FL has therefore been widely applied in domains such as anomaly detection in the Internet of Things (IoT). However, IoT networks and devices have limited protection capabilities, making FL vulnerable to data poisoning attacks. To address this challenge, we propose a new robust FL system designed to counter data poisoning attacks. Our approach, named Federated Learning with Attention Aggregation (FedAA), leverages AutoEncoder (AE) models for local anomaly detection in IoT networks. In FedAA, the global model is aggregated from local models using a novel aggregation method, named Attention Aggregation (AA). This method is specifically designed to mitigate the impact of data poisoning attacks, which typically inflate the loss values of the affected local models. More precisely, local models with high loss values are assigned lower attention weights when contributing to the global model aggregation, and vice versa. As a result, the proposed AA method enhances the robustness of FedAA against data poisoning attacks. Extensive experiments are conducted on three IoT anomaly detection datasets: N-BaIoT, NSL-KDD, and UNSW. The results show that FedAA is more robust than other FL systems in mitigating data poisoning attacks.
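The loss-based attention weighting described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `attention_aggregate`, the softmax-over-negative-losses weighting, and the `temperature` parameter are assumptions chosen to match the stated behavior (high local loss receives low attention weight).

```python
import numpy as np

def attention_aggregate(local_weights, local_losses, temperature=1.0):
    """Aggregate flattened local model parameters with loss-based attention.

    Sketch of the idea behind Attention Aggregation (AA): clients whose
    local loss is high (as data poisoning tends to cause) receive lower
    attention weights. The exact weighting scheme here (softmax over
    negative losses, with a temperature) is an assumption for illustration.

    local_weights: list of 1-D numpy arrays, one per client
    local_losses:  list of scalar local loss values, one per client
    """
    losses = np.asarray(local_losses, dtype=float)
    # Softmax over negative losses: lower loss -> higher attention weight.
    scores = -losses / temperature
    scores -= scores.max()  # subtract max for numerical stability
    attn = np.exp(scores)
    attn /= attn.sum()
    # The global model is the attention-weighted average of local models.
    stacked = np.stack(local_weights)
    global_weights = (attn[:, None] * stacked).sum(axis=0)
    return global_weights, attn
```

With this weighting, a client reporting a much larger loss contributes almost nothing to the global average, which is the mitigation effect the abstract attributes to AA.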