by Debicha, Islam; Debatty, Thibault; Dricot, Jean-Michel; Mees, Wim
Reference: The Sixteenth International Conference on Systems (ICONS 2021), 18th-22nd April 2021, pages 45-49
Publication: Published, 2021-04-18
Publication in conference proceedings
Abstract: Nowadays, Deep Neural Networks (DNNs) achieve state-of-the-art results in many machine learning areas, including intrusion detection. Nevertheless, recent studies in computer vision have shown that DNNs can be vulnerable to adversarial attacks, which deceive them into misclassification by injecting specially crafted data. In security-critical areas, such attacks can cause serious damage; therefore, in this paper, we examine the effect of adversarial attacks on deep learning-based intrusion detection. In addition, we investigate the effectiveness of adversarial training as a defense against such attacks. Experimental results show that, with sufficient distortion, adversarial examples are able to mislead the detector, and that adversarial training can improve the robustness of intrusion detection.
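For illustration, the following is a minimal sketch of the generic techniques the abstract refers to: crafting adversarial examples via the fast gradient sign method (FGSM) and hardening a detector with adversarial training. It does not reproduce the paper's actual code; the model architecture, epsilon value, and feature dimensions (here chosen to resemble a typical network-flow dataset) are assumptions, written in PyTorch.

```python
# Hedged sketch (not the authors' implementation): FGSM adversarial examples and
# an adversarial-training step for a DNN-based intrusion detector.
import torch
import torch.nn as nn

class IDSNet(nn.Module):
    """Small feed-forward classifier standing in for a DNN-based intrusion detector.
    The 41-feature input size is an illustrative assumption."""
    def __init__(self, n_features=41, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_example(model, x, y, epsilon, loss_fn):
    """Perturb x in the direction of the sign of the loss gradient (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    """One training step on a mix of clean and FGSM-perturbed samples,
    the basic idea behind adversarial training."""
    loss_fn = nn.CrossEntropyLoss()
    x_adv = fgsm_example(model, x, y, epsilon, loss_fn)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design point mirrored here is that the defended model sees both clean and perturbed versions of each batch, so robustness is gained without discarding accuracy on unmodified traffic; the paper evaluates this trade-off experimentally.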