Peer-reviewed article
Abstract: Automating the process of finding good parameter settings is important in the design of high-performing algorithms. These automatic processes can generally be categorized into off-line and on-line methods. Off-line configuration consists of learning and selecting the best setting in a training phase and usually keeps it fixed while solving an instance. On-line adaptation methods, by contrast, vary the parameter setting adaptively during each algorithm run. In this work, we provide an empirical study of both approaches on the operator selection problem, explore the possibility of varying parameter values at runtime according to a non-adaptive distribution tuned off-line, and combine the off-line and on-line approaches. In particular, using an off-line tuned distribution to vary parameter values at runtime appears to be a promising idea for automatic configuration. © 2014 Springer International Publishing.
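To illustrate the contrast the abstract draws, here is a minimal sketch (not the paper's implementation): a parameter is either fixed to a single off-line tuned value for the whole run, or redrawn at each step from a non-adaptive distribution whose hyper-parameters would themselves be tuned off-line. All names, the choice of a clamped Gaussian, and the numeric values are illustrative assumptions.

```python
import random

def fixed_setting(step, value=0.3):
    """Off-line tuning: one value selected during training, kept fixed for the whole run."""
    # 'value' stands in for any off-line tuned parameter (hypothetical example).
    return value

def sampled_setting(step, mu=0.3, sigma=0.1, lo=0.0, hi=1.0):
    """Off-line tuned distribution: redraw the parameter value at every step of the run."""
    # mu and sigma would be tuned off-line; the per-step draw is non-adaptive
    # (it does not react to feedback from the run).
    v = random.gauss(mu, sigma)
    return min(max(v, lo), hi)  # clamp to the feasible range

if __name__ == "__main__":
    for step in range(5):
        print(step, fixed_setting(step), round(sampled_setting(step), 3))
```

The sketch only shows how the two parameter-setting policies differ at runtime; how the fixed value or the distribution's hyper-parameters are tuned off-line is outside its scope.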