Abstract: Inferring gene regulatory networks from expression profiles is a challenging problem that has been tackled with many different approaches. When posed as an optimization problem, the typical goal is to minimize an error measure, such as the relative squared error, between the real profiles and those generated by a model whose parameters are to be optimized. In this paper, we use dynamic recurrent neural networks to model regulatory interactions and systematically study the "fitness landscape" induced by the relative squared error. Although the results of the study indicate that the generated landscapes have a positive fitness-distance correlation, the error values span several orders of magnitude over very short distances. This suggests that the fitness landscape has extremely deep valleys, which can cause state-of-the-art general-purpose continuous optimization algorithms to perform very poorly. Further results, obtained from an analysis based on perturbations of the optimal network topology, support approaches in which the space of network topologies and the space of network parameters are decoupled.
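The abstract does not reproduce the objective function itself; as a hedged sketch, one common form of the relative squared error for this setting, assuming measured expression levels x_i(t_k) for gene i at sampled time point t_k and model-generated profiles \hat{x}_i(t_k; \theta) produced by a recurrent network with parameters \theta (symbols introduced here for illustration, not taken from the paper), is

% Relative squared error over N genes and T sampled time points:
% deviations of the simulated profile from the measured one,
% normalized gene- and time-point-wise by the measured value.
E(\theta) = \frac{1}{N\,T} \sum_{i=1}^{N} \sum_{k=1}^{T}
            \left( \frac{\hat{x}_i(t_k;\theta) - x_i(t_k)}{x_i(t_k)} \right)^{2}

Under this kind of normalization, small denominators x_i(t_k) can inflate the error by orders of magnitude for tiny parameter changes, which is consistent with the deep, narrow valleys reported in the landscape analysis; the exact error definition used in the paper may differ in its normalization or averaging.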