By: Lunghi, Daniele
Jury president: Bonatto, Daniele
Promoters: Bontempi, Gianluca; Ioannidis, Yannis
Co-promoters: Sacharidis, Dimitrios; Simitsis, Alkis
Publication: Unpublished, 2025-10-22

Doctoral thesis
Abstract: Data-driven fraud detection uses machine learning to detect fraudulent activities automatically. Thanks to its scalability and strong generalization capabilities, it plays a crucial role in securing online payment systems. Traditional fraud detection approaches, however, do not account for the possibility that fraudsters may adapt their behavior to bypass the fraud detection system, a limitation attackers may exploit. The study of adaptive attackers, and of the defenses to employ against them, is called adversarial machine learning. In computer vision, adversarial attacks have proven highly effective, leading to an arms race between attackers and defenders. The proposed solutions, however, are generally application-specific, and their applicability to credit card fraud detection is dubious, with only a handful of works attempting to apply adversarial techniques in this context.

This thesis aims to bridge the gap between credit card fraud detection and adversarial machine learning through three main contributions:

- A systematic assessment of adversarial attacks through the lens of their applicability to credit card fraud detection. We discuss the impact of transaction aggregations, concept drift, and attackers' capabilities on securing fraud detection systems. We complement this analysis with an empirical assessment of some of the most common attacks in the adversarial machine learning literature, showing how they struggle to adapt to the fraud detection domain due to a fundamental misalignment between the threat models of the domain and those for which the algorithms were designed.
- The first quantitative model of fraudsters in the fraud detection literature. This analysis, backed by experiments performed on real industrial data, models fraudsters' actions as a function of the information they may have access to, such as the card they stole and the previous transactions performed with it. The resulting model, which exposes the limits of fraudsters' knowledge about the cards they steal, led to the development of two novel oversampling algorithms, MIMO ADV-O and TimeGAN ADV-O, which perform in line with other state-of-the-art resampling algorithms. The ADV-O framework became the backbone of this thesis's model of fraudsters' capabilities and knowledge.
- The design of a new adversarial attack for credit card fraud detection, called FRAUD-RLA. FRAUD-RLA employs reinforcement learning to generate fraudulent transactions efficiently. The algorithm takes into account the specificities of fraud detection to operate under limited knowledge and capabilities, thereby maximizing the reward collected over time. FRAUD-RLA is designed both to exemplify a family of RL-based adversarial attacks we should be aware of and to serve as an assessment tool for the security of fraud detection systems.

Our analysis reveals that credit card fraud detection exhibits several peculiarities that render traditional adversarial attacks ineffective, owing to attackers' limited knowledge and other domain-specific constraints. However, we also show that it is possible to design domain-specific attacks using reinforcement learning as a tool. Overall, this thesis highlights the necessity of further studying the adversarial security of fraud detection engines and provides a framework for modeling, understanding, and assessing the threat.
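To make the limited-knowledge threat model concrete, the following is a minimal, hypothetical sketch of an RL-style attacker that learns from accept/reject feedback alone. It is not the thesis's FRAUD-RLA algorithm: the detector, the action space, and the epsilon-greedy bandit used here are illustrative assumptions standing in for the real system.

```python
import random

random.seed(0)

# Hypothetical black-box fraud detector: it blocks transactions above a
# hidden amount threshold. The attacker never sees this threshold, only
# per-transaction accept/reject feedback, mirroring the limited-knowledge
# setting described in the abstract.
HIDDEN_THRESHOLD = 120.0

def detector_accepts(amount: float) -> bool:
    return amount <= HIDDEN_THRESHOLD

# Candidate transaction amounts the attacker can submit (its action space).
actions = [50.0, 100.0, 150.0, 200.0]

# Epsilon-greedy bandit: estimate the expected reward (money collected)
# of each action purely from accept/reject outcomes.
q = {a: 0.0 for a in actions}  # running mean reward per action
n = {a: 0 for a in actions}    # times each action was tried
epsilon = 0.1

for _ in range(2000):
    if random.random() < epsilon:
        a = random.choice(actions)            # explore
    else:
        a = max(actions, key=lambda x: q[x])  # exploit the best estimate
    reward = a if detector_accepts(a) else 0.0
    n[a] += 1
    q[a] += (reward - q[a]) / n[a]            # incremental mean update

best = max(actions, key=lambda x: q[x])
print(best)  # the attacker settles on the largest amount the detector accepts
```

The point of the sketch is the interaction loop, not the specific learner: the attacker needs no gradients, features, or model internals (which classical adversarial attacks typically assume), only repeated trial transactions and their outcomes.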



