Investigation of poisonous attacks against network intrusion detection systems

Authors

  • Van Quan Nguyen Faculty of Information Technology, Le Quy Don Technical University
  • Van Cuong Nguyen Faculty of Information Technology, Le Quy Don Technical University
  • Tuan Hao Hoang Faculty of Information Technology, Le Quy Don Technical University

DOI:

https://doi.org/10.56651/lqdtu.jst.v11.n01.359.ict

Keywords:

Adversarial Attack, Robustness of Deep Learning, Network Intrusion Detection System

Abstract

Nowadays, deep learning has become one of the most powerful and efficient frameworks and can be applied across a wide range of areas. In particular, advances in modern deep learning approaches have proven their effectiveness in building next-generation smart intrusion detection systems (IDSs). However, deep learning-based systems remain vulnerable to adversarial examples, which can undermine the robustness of the models. Poisoning attacks are a family of adversarial attacks against machine learning-based models in which an adversary injects a small proportion of malicious samples into the training dataset to degrade the performance of the victim's models. The robustness of deep learning-based IDSs has therefore become an important concern. In this work, we investigate poisoning attacks against deep learning-based network intrusion detection systems. We clarify the general attack strategy and perform experiments on multiple datasets, including CTU13-08, CTU13-09, CTU13-10, and CTU13-13. Experimental results show that even a small number of injected samples drastically reduces the performance of deep learning-based IDSs.
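The general strategy described in the abstract — injecting a small fraction of mislabeled samples into the training set to degrade a learned detector — can be illustrated with a minimal, hypothetical sketch. The example below is not the paper's method and does not use the CTU13 data; it simulates label-flipping poisoning against a toy 1-nearest-neighbour classifier on synthetic one-dimensional "traffic features", where the attacker relabels a fraction of malicious training samples as benign:

```python
import random

random.seed(0)


def make_data(n):
    """Toy dataset: benign (class 0) features cluster near 0.0,
    malicious (class 1) features cluster near 1.0."""
    data = []
    for _ in range(n):
        data.append((random.gauss(0.0, 0.15), 0))
        data.append((random.gauss(1.0, 0.15), 1))
    return data


def predict(train, x):
    # 1-nearest-neighbour prediction: label of the closest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]


def accuracy(train, test):
    return sum(predict(train, x) == y for x, y in test) / len(test)


train, test = make_data(200), make_data(100)
clean_acc = accuracy(train, test)

# Poisoning step: the attacker flips a fraction of malicious training
# labels to "benign", so similar attack traffic is later missed.
poison_rate = 0.4
poisoned = [(x, 0) if y == 1 and random.random() < poison_rate else (x, y)
            for x, y in train]
poisoned_acc = accuracy(poisoned, test)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

Even though only a fraction of one class is relabeled and no feature values are modified, the detector's test accuracy drops noticeably, mirroring the qualitative effect the abstract reports for deep learning-based IDSs.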

Published

2022-06-24

Section

Articles
