By: Knowles, Joshua; Corne, David W.
Reference: IEEE Transactions on Evolutionary Computation, 7(2), pp. 100-116
Publication: Published, 2003-04
Peer-reviewed article
Abstract: Search algorithms for Pareto optimization are designed to obtain multiple solutions, each offering a different tradeoff of the problem objectives. To make the different solutions available at the end of an algorithm run, procedures are needed for storing them, one by one, as they are found. In a simple case, this may be achieved by placing each point (the image of a solution in objective space) that is found into an "archive" which maintains only nondominated points and discards all others. However, even a set of mutually nondominated points is potentially very large (infinite in continuous objective spaces), necessitating a bound on the archive's capacity. With such a bound in place, it is no longer obvious which points should be maintained and which discarded; we would like the archive to maintain a representative and well-distributed subset of the points generated by the search algorithm, and we would also like this set to converge. To achieve these objectives, we propose an adaptive archiving algorithm, suitable for use with any Pareto optimization algorithm, with the following useful properties: it maintains an archive of bounded size, encourages an even distribution of points across the Pareto front, is computationally efficient, and admits (with caveats) a proof of a form of convergence. Previously proposed archiving algorithms, which we also discuss, have more general convergence properties, but at the expense of not being able to maintain an even distribution of points along the front, or are very computationally expensive, or do not guarantee to maintain a certain minimum number of points in the archive. In contrast, the method proposed here maintains evenness, efficiency, and cardinality, and provably converges under certain conditions (e.g., when there are two objectives) but not all. Finally, the notions underlying our convergence proofs support a new way to rigorously define what is meant by a "good spread of points" across a Pareto front, in the context of grid-based archiving schemes. This leads to proofs and conjectures applicable to archive sizing and grid sizing in any Pareto optimization algorithm maintaining a grid-based archive.
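The abstract describes a bounded, grid-based archive of nondominated points. The Python sketch below illustrates the general idea only (dominance filtering plus eviction from the most crowded grid cell when the capacity bound is exceeded); it is a simplified illustration under assumed details such as a fixed number of grid divisions per objective, minimization, and crowding measured against the current archive's bounding box, and is not the authors' adaptive grid archiving procedure.

```python
# Illustrative sketch of a bounded, grid-based nondominated archive (minimization).
# Not the authors' exact adaptive archiving algorithm; details here are assumptions.

from typing import List, Tuple

Point = Tuple[float, ...]

def dominates(a: Point, b: Point) -> bool:
    """True if a Pareto-dominates b under minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

class GridArchive:
    def __init__(self, capacity: int, divisions: int = 8):
        self.capacity = capacity      # bound on archive size
        self.divisions = divisions    # assumed: fixed grid cells per objective
        self.points: List[Point] = []

    def _cell(self, p: Point) -> Tuple[int, ...]:
        """Map a point to a grid cell spanned by the current archive's bounding box."""
        los = [min(q[i] for q in self.points) for i in range(len(p))]
        his = [max(q[i] for q in self.points) for i in range(len(p))]
        cell = []
        for x, lo, hi in zip(p, los, his):
            width = (hi - lo) or 1.0
            cell.append(min(int((x - lo) / width * self.divisions), self.divisions - 1))
        return tuple(cell)

    def add(self, p: Point) -> bool:
        """Try to insert p; keep only nondominated points and respect the capacity bound."""
        if any(dominates(q, p) or q == p for q in self.points):
            return False                                  # p is dominated or a duplicate
        self.points = [q for q in self.points if not dominates(p, q)]
        self.points.append(p)
        if len(self.points) > self.capacity:
            # Evict from the most crowded cell; if that is p's own cell, reject p itself,
            # so new points are only kept when they fall in less crowded regions.
            counts = {}
            for q in self.points:
                counts.setdefault(self._cell(q), []).append(q)
            crowded_cell = max(counts, key=lambda c: len(counts[c]))
            victim = p if self._cell(p) == crowded_cell else counts[crowded_cell][0]
            self.points.remove(victim)
        return p in self.points
```

As a usage note, such an archive would typically be updated with every candidate point produced by the search algorithm, e.g. `archive.add((f1(x), f2(x)))`, and the surviving `archive.points` form the approximation of the Pareto front returned at the end of the run.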