Peer-reviewed article
Abstract: Due to the lack of systematic empirical analyses and comparisons of ideas and methods, a clearly established state of the art is still missing in the optimization-based design of robot swarms. In this paper, we propose an experimental protocol for the comparison of fully-automatic design methods. This protocol is characterized by two notable elements: a way to define benchmarks for the evaluation and comparison of design methods, and a sampling strategy that minimizes the variance when estimating their expected performance. To define generally applicable benchmarks, we introduce the notion of a mission generator: a tool that generates missions mimicking those a design method will eventually have to solve. To minimize the variance of the performance estimate, we show that, under some common assumptions, one should adopt the sampling strategy that maximizes the number of missions considered—a formal proof is provided as supplementary material. We illustrate the experimental protocol by comparing the performance of two off-line fully-automatic design methods that were presented in previous publications.
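The sampling-strategy claim can be illustrated with a minimal simulation that is not from the paper: under a simple hierarchical noise model (mission-to-mission variability plus run-to-run noise, with hypothetical standard deviations), splitting a fixed budget of runs across more missions yields a lower-variance estimate of expected performance.

```python
import random
import statistics

random.seed(0)

SIGMA_MISSION = 1.0   # between-mission spread (hypothetical value)
SIGMA_RUN = 0.5       # within-mission, run-to-run noise (hypothetical value)

def estimate_mean(n_missions, runs_per_mission):
    """Estimate expected performance: average the runs within each
    sampled mission, then average the mission means."""
    mission_means = []
    for _ in range(n_missions):
        mu = random.gauss(0.0, SIGMA_MISSION)  # this mission's true score
        runs = [random.gauss(mu, SIGMA_RUN) for _ in range(runs_per_mission)]
        mission_means.append(statistics.fmean(runs))
    return statistics.fmean(mission_means)

def estimator_variance(n_missions, runs_per_mission, trials=2000):
    """Empirical variance of the estimator over repeated experiments."""
    estimates = [estimate_mean(n_missions, runs_per_mission)
                 for _ in range(trials)]
    return statistics.variance(estimates)

# Same total budget of 400 runs, split differently:
# more missions with fewer runs each gives a tighter estimate.
for m, r in [(10, 40), (40, 10), (400, 1)]:
    print(f"{m:3d} missions x {r:2d} runs -> "
          f"variance {estimator_variance(m, r):.4f}")
```

By the law of total variance, the estimator's variance is SIGMA_MISSION²/M + SIGMA_RUN²/(M·R); with the total budget M·R fixed, the second term is constant, so maximizing the number of missions M minimizes the whole expression, consistent with the protocol's recommendation.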