Peer-reviewed article
Abstract: The development of algorithms for tackling continuous optimization problems has been one of the most active research topics in soft computing in recent decades. It has led to many high-performing algorithms from areas such as evolutionary computation and swarm intelligence. These developments have been accompanied by an increasing effort to benchmark algorithms on the various benchmark sets proposed by different researchers. In this article, we explore the interaction between benchmark sets, algorithm tuning, and algorithm performance. To do so, we compare the performance of seven proven high-performing continuous optimizers on two different benchmark sets: the functions of the special session on real-parameter optimization of the 2005 IEEE Congress on Evolutionary Computation and the functions used for a recent special issue of the Soft Computing journal on large-scale optimization. While one conclusion of our experiments is that automatic algorithm tuning improves the performance of the tested continuous optimizers, our main conclusion is that the choice of the benchmark set has a much larger impact on the ranking of the compared optimizers. This latter conclusion holds whether default or tuned parameter settings are used.
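As a purely illustrative aside, and not the authors' own analysis code, the kind of benchmark-set-dependent ranking comparison described in the abstract could be sketched as follows: rank the optimizers on every function of each benchmark set, average the ranks per set, and check how strongly the two resulting rankings agree. All optimizer labels and error values below are hypothetical placeholders; only the function counts of the two benchmark sets (25 for CEC 2005, 19 for the Soft Computing large-scale set) reflect the actual suites.

```python
# Illustrative sketch only: quantify how much two benchmark sets agree
# on the ranking of the same optimizers. Data here is synthetic.
import numpy as np
from scipy.stats import rankdata, kendalltau

optimizers = ["opt1", "opt2", "opt3", "opt4", "opt5", "opt6", "opt7"]  # hypothetical labels

# Hypothetical final errors (rows: optimizers, columns: benchmark functions).
rng = np.random.default_rng(0)
errors_cec2005 = rng.lognormal(size=(7, 25))  # CEC 2005 special session: 25 functions
errors_soco = rng.lognormal(size=(7, 19))     # Soft Computing large-scale set: 19 functions

def mean_rank(errors):
    """Rank optimizers on each function (1 = lowest error), then average over functions."""
    per_function_ranks = rankdata(errors, axis=0)  # rank within each column
    return per_function_ranks.mean(axis=1)

ranks_cec = mean_rank(errors_cec2005)
ranks_soco = mean_rank(errors_soco)

# A Kendall tau near 1 would mean both benchmark sets rank the optimizers
# alike; a low value suggests the benchmark set itself drives the ranking.
tau, p_value = kendalltau(ranks_cec, ranks_soco)
print("Mean ranks on set 1:", dict(zip(optimizers, ranks_cec.round(2))))
print("Mean ranks on set 2:", dict(zip(optimizers, ranks_soco.round(2))))
print(f"Kendall tau between the two rankings: {tau:.2f} (p={p_value:.3f})")
```

Such a rank-correlation summary is one simple way to make the abstract's claim measurable; the article itself may use different statistics.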