Abstract: This thesis presents my research on fast neural architecture search (NAS) algorithms, and specifically on algorithms for dynamically growing and pruning artificial neural networks (NNs) while they are trained. NAS, as a problem and as a research field, has emerged out of researchers' concern for developing accurate and efficient neural architectures for target tasks, while also avoiding undesirable and even harmful characteristics in these architectures, such as a very high parameter count or an excessive time and energy cost of using them. The NAS problem is in fact bilevel: in order to optimize an NN, one must optimize both its architecture and its learnable parameters (weights, biases and the like), and the relative importance of these two levels is still the subject of much debate. For this reason, the most "extreme" fast NAS approaches can be classified into two paradigms, "train one net" and "train zero nets", which address the two optimization levels of NAS simultaneously or sequentially, respectively. This thesis provides an extensive survey of the state of the art for both paradigms, but its main research work is concerned only with the first paradigm: "train one net".

The thesis' main contribution is DensEMANN, an algorithm for quickly growing and training minimal yet optimal DenseNet architectures for target tasks. Inspired by a 1990s method for growing early NNs (EMANN), the algorithm is based on a "self-structuring" or "in-supervised" approach that, through an introspection of the network's learned weights, determines when to add and/or remove components in it. As a result, it can generate architectures that, while small in size, are competitive at their target task: within half a GPU day, its latest version can generate NNs with fewer than 0.5 million learnable parameters and 93% to 95% accuracy on image classification benchmarks such as CIFAR-10, SVHN and Fashion-MNIST.

The thesis is built as a succession of research works and publications, highlighting key stages in DensEMANN's development. One of these works also covers a parallel research line, in which techniques for simultaneously pruning and training NNs are compared to and hybridized with DensEMANN. In conclusion, this research work highlights the potential of minimal NN architectures for competitiveness (at least in a Pareto-optimal sense), the importance of an NN's architecture for its performance and ability to learn target tasks, and the important role played by "macro" architecture patterns (the connection schemes for layers or groups of layers), relative to "micro" patterns (the configuration of individual neurons in each layer), in fostering this performance.
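For reference, the bilevel structure of NAS mentioned above is commonly formalised as follows; the notation here is the generic one used in the NAS literature rather than anything specific to this thesis ($\mathcal{A}$ denotes the architecture search space, and $\mathcal{L}_{\mathrm{val}}$ and $\mathcal{L}_{\mathrm{train}}$ the validation and training losses):

% Generic bilevel NAS formulation (standard notation, assumed here):
% the outer level selects an architecture a on validation data, while the
% inner level fits that architecture's weights w on training data.
\begin{equation}
  \min_{a \in \mathcal{A}} \; \mathcal{L}_{\mathrm{val}}\bigl(a, w^{*}(a)\bigr)
  \quad \text{subject to} \quad
  w^{*}(a) = \arg\min_{w} \; \mathcal{L}_{\mathrm{train}}(a, w)
\end{equation}

In these terms, "train one net" approaches tackle the two levels simultaneously within a single training run, whereas "train zero nets" approaches decouple them, selecting $a$ before any weight training takes place.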