by Loris, Ignace
Reference: Optimization Techniques for Inverse Problems workshop (4: 06-07/09/2021: Online (was: Modena, Italy))
Publication: Unpublished, 2021-09-06
Conference presentation
Abstract: Non-Euclidean versions of some primal-dual iterative optimization algorithms are presented. In these algorithms the proximal operator is based on Bregman divergences instead of Euclidean distances. Double-loop iterations are also proposed, which can be used to minimize a convex cost function consisting of a sum of several parts: a differentiable part, a proximable part, and the composition of a linear map with a proximable function. While the number of inner iterations is fixed in advance in these algorithms, convergence is guaranteed by virtue of an inner-loop warm-start strategy, showing that inner-loop "starting rules" can be just as effective as "stopping rules" for guaranteeing convergence. The algorithms are applicable to the numerical solution of convex optimization problems encountered in inverse problems, imaging, and statistics; they reduce to the classical proximal gradient algorithm in certain special cases and also generalize other existing algorithms.
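The abstract notes that the proposed algorithms reduce to the classical (Euclidean) proximal gradient algorithm in special cases. As background, the following is a minimal sketch of that classical algorithm applied to a hypothetical lasso problem, minimizing 0.5·||Ax − b||² + λ||x||₁; the problem data, step size, and iteration count are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 under the Euclidean distance
    # (the Bregman variants in the talk would replace this step).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, n_iter=500):
    # Classical proximal gradient (ISTA) for the lasso:
    # gradient step on the differentiable part 0.5*||Ax - b||^2,
    # followed by the prox of the proximable part lam*||x||_1.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)   # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Illustrative random problem instance (assumed, not from the source).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
x_hat = proximal_gradient(A, b, lam, step)
```

The step size 1/L with L the squared spectral norm of A is the standard choice guaranteeing monotone decrease of the objective for this smooth term.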