Abstract: Machine learning algorithms are now embedded in numerous applications owing to their remarkable achievements. However, many studies in the field of computer vision have shown that machine learning models can be fooled by intentionally crafted inputs, called adversarial examples. These adversarial examples exploit the intrinsic vulnerability of machine learning models. This vulnerability raises serious concerns in cybersecurity, since an increasing number of security systems are powered by machine learning algorithms. In this thesis, we explore the effects of adversarial machine learning on cybersecurity systems driven by machine learning models, focusing on intrusion detection systems. To do so, we implement and evaluate evasion attacks in both black-box and white-box settings to generate adversarial network traffic capable of fooling an intrusion detection system. We also design and test novel evasion attacks and adversarial defenses to improve the robustness of intrusion detection systems. The experimental results demonstrate that machine learning-based intrusion detection systems are vulnerable to adversarial attacks in which small, specially crafted perturbations are added to malicious network traffic, allowing attackers to evade detection and thus successfully carry out their original attacks. Adversarial detection, on the other hand, provides an effective way to mitigate such attacks, at the expense of increased model complexity, by adding a second line of defense.
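To make the white-box evasion setting concrete, the sketch below applies a fast-gradient-sign (FGSM-style) perturbation to the feature vector of a malicious flow against a toy detector. This is a minimal, hypothetical illustration: the model, the four flow features, the epsilon value, and the data are assumptions made for the example, not the detectors, datasets, or attacks evaluated in the thesis.

```python
# Minimal, self-contained sketch of a white-box evasion attack (FGSM-style)
# against a toy ML-based intrusion detector. Illustrative only: the model,
# features, and data below are hypothetical, not the thesis's actual setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy detector: logistic regression over 4 synthetic flow features
# (e.g. duration, packet count, byte count, inter-arrival time; all assumed).
detector = nn.Sequential(nn.Linear(4, 1))
loss_fn = nn.BCEWithLogitsLoss()

# One "malicious" flow, features normalized to [0, 1]; label 1 = malicious.
x = torch.tensor([[0.9, 0.8, 0.7, 0.2]], requires_grad=True)
y = torch.tensor([[1.0]])

# FGSM: step the input in the direction that maximizes the detector's loss,
# bounded by epsilon so the perturbation stays small.
loss = loss_fn(detector(x), y)
loss.backward()

epsilon = 0.05  # assumed perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("original malicious score:   ", torch.sigmoid(detector(x)).item())
print("adversarial malicious score:", torch.sigmoid(detector(x_adv)).item())
```

In practice, perturbing real network traffic is harder than perturbing a feature vector: the modified flow must remain valid, parseable traffic that still achieves the attacker's goal, which constrains which features can be changed and by how much.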