Abstract: Effective Machine Learning (ML) requires more than just accurate models; it also demands consideration of factors such as model complexity, fairness, and other task-specific requirements. Fulfilling these requirements begins at the data level, by selecting features that contribute to addressing these concerns, a task that can benefit from a many-objective optimization approach to Feature Selection (FS).

This thesis therefore studies Many-Objective Feature Selection (MOFS) and contributes to the development of efficient and responsible ML solutions. However, the large number of solutions produced by MOFS poses an interpretability challenge; we therefore also aim to propose a methodology for tackling this limitation of MOFS.

Although FS has long been researched, previous work, on both filter and wrapper methods, has failed to address this gap, focusing on only one or at most two objectives. Likewise, for the interpretability of FS results, no methodological approach has been proposed; instead, a basic tabular representation has been used.

We propose a framework that uses non-dominated sorting genetic algorithms to balance important and often conflicting objectives for FS; in particular, from four up to fifteen objectives can be considered with this method. For interpretability, our proposed methodology, Comprehensive Many-Objective Preference Analysis and Solution Selection (COMPASS), consists of six steps that consider three viewpoints: objectives, solutions, and variables (i.e., features).

To achieve the research goal, we follow a structured approach: first, an extensive literature review establishes the state of the art and identifies open challenges. Next, empirical analyses of single-objective filter and wrapper methods, as well as multi-objective wrapper methods, assess their strengths and limitations. Our MOFS framework is then proposed and evaluated through multiple experiments, including its application to fairness in ML. Finally, the interpretability methodology is instantiated as an interactive dashboard, which is validated through an experimental study involving 50 participants, with statistical analysis to assess its effectiveness.

The findings highlight that no single FS method is universally optimal; instead, the best approach depends on dataset characteristics, task requirements, and objectives. While filter methods are computationally efficient and wrapper methods enhance model performance in single-objective settings, the proposed MOFS framework successfully balances multiple conflicting indicators related to performance, complexity, and fairness. Moreover, the interpretability methodology proved essential in helping data scientists better understand MOFS results, enabling informed decision-making in FS.
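As an illustration only, and not the thesis's actual implementation, the following minimal sketch shows how a non-dominated sorting genetic algorithm can drive feature selection with conflicting objectives. It assumes the pymoo and scikit-learn libraries, uses NSGA-II with just two objectives (classification error and subset size) for brevity, and encodes each candidate feature subset as a binary mask; the framework described above scales the same idea to many more objectives.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.operators.sampling.rnd import BinaryRandomSampling
from pymoo.operators.crossover.pntx import TwoPointCrossover
from pymoo.operators.mutation.bitflip import BitflipMutation
from pymoo.optimize import minimize


class FeatureSelectionProblem(ElementwiseProblem):
    """Binary-encoded FS: each gene toggles one feature on or off."""

    def __init__(self, X, y):
        super().__init__(n_var=X.shape[1], n_obj=2, xl=0, xu=1, vtype=bool)
        self.X, self.y = X, y

    def _evaluate(self, x, out, *args, **kwargs):
        mask = x.astype(bool)
        if not mask.any():  # an empty subset is assigned the worst values
            out["F"] = [1.0, 1.0]
            return
        acc = cross_val_score(KNeighborsClassifier(),
                              self.X[:, mask], self.y, cv=3).mean()
        # Objective 1: classification error; objective 2: fraction of features kept.
        out["F"] = [1.0 - acc, mask.sum() / self.n_var]


X, y = load_breast_cancer(return_X_y=True)
algorithm = NSGA2(pop_size=40,
                  sampling=BinaryRandomSampling(),
                  crossover=TwoPointCrossover(),
                  mutation=BitflipMutation(),
                  eliminate_duplicates=True)
res = minimize(FeatureSelectionProblem(X, y), algorithm,
               ("n_gen", 20), seed=1, verbose=False)
print(res.F)  # Pareto front: (error, subset-size ratio) per non-dominated subset
```

Each point on the resulting Pareto front is a feature subset trading accuracy against complexity; adding fairness or other indicators as further objectives turns this into the many-objective search studied in the thesis, where reference-direction variants of non-dominated sorting are typically preferred.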