
An Alternating Augmented Lagrangian method for constrained nonconvex optimization

Sciandrone M
2020

Abstract

We consider the problem of minimizing a smooth nonconvex function over a structured convex feasible set, namely, one defined by two sets of constraints that are easy to handle when treated separately. To exploit this structure, we define an equivalent formulation by duplicating the variables and consider the augmented Lagrangian of the resulting problem. Following the idea of the Alternating Direction Method of Multipliers (ADMM), we propose an algorithm in which a two-block decomposition method is embedded within an augmented Lagrangian framework. The distinctive features of the proposed algorithm are the following: (1) the exact solution of a possibly nonconvex subproblem is not required; (2) the penalty parameter is updated only after an approximate stationary point of the augmented Lagrangian has been determined. Global convergence results are established under mild assumptions and without requiring convexity of the objective function. Although the primary aim of the paper is theoretical, we perform numerical experiments on a nonconvex problem arising in machine learning, and the results show the practical advantages of the proposed approach over the classical ADMM.
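To make the scheme concrete, the following is a minimal sketch (in Python) of a variable-splitting augmented Lagrangian loop of the kind the abstract describes: the problem min f(x) over x in X ∩ Z is rewritten as min f(x) subject to x in X, z in Z, x = z, and the augmented Lagrangian L_rho(x, z, lam) = f(x) + lam^T (x - z) + (rho/2) ||x - z||^2 is minimized by alternating inexact steps on the two blocks. All names, projections, step-size rules, and the penalty update below are illustrative assumptions, not the paper's exact algorithm.

    import numpy as np

    def alternating_al(grad_f, proj_X, proj_Z, x0, rho=1.0, rho_growth=10.0,
                       base_step=0.5, inner_tol=1e-4, outer_tol=1e-6,
                       max_inner=1000, max_outer=15):
        # Variable duplication:  min f(x)  s.t.  x in X, z in Z, x = z,
        # with augmented Lagrangian
        #   L_rho(x, z, lam) = f(x) + lam^T (x - z) + (rho/2) ||x - z||^2.
        x = proj_X(np.asarray(x0, dtype=float))
        z = proj_Z(np.asarray(x0, dtype=float))
        lam = np.zeros_like(x)
        for _ in range(max_outer):
            for _ in range(max_inner):
                # x-block: one projected-gradient step on L_rho over X
                # (an inexact step; no exact subproblem solution is needed).
                alpha = base_step / (1.0 + rho)   # step shrinks as rho grows
                g = grad_f(x) + lam + rho * (x - z)
                x_new = proj_X(x - alpha * g)
                # z-block: L_rho is quadratic in z with unconstrained
                # minimizer x + lam/rho, so the block step is a projection.
                z_new = proj_Z(x_new + lam / rho)
                moved = np.linalg.norm(x_new - x) + np.linalg.norm(z_new - z)
                x, z = x_new, z_new
                if moved <= inner_tol:            # approximate stationary point
                    break
            lam = lam + rho * (x - z)             # multiplier update
            if np.linalg.norm(x - z) <= outer_tol:
                break                             # x ≈ z: feasible for X ∩ Z
            rho *= rho_growth                     # then tighten the penalty
        return x

    # Toy usage: a nonconvex quadratic over a box intersected with a ball.
    A = np.array([[1.0, 0.0], [0.0, -0.5]])          # indefinite Hessian
    grad_f = lambda x: A @ x                          # f(x) = 0.5 x^T A x
    proj_box = lambda x: np.clip(x, -1.0, 1.0)        # X = [-1, 1]^2
    proj_ball = lambda x: x / max(1.0, np.linalg.norm(x) / 1.5)  # Z: ||z|| <= 1.5
    print(alternating_al(grad_f, proj_box, proj_ball, np.array([0.3, 0.4])))

In this sketch the inner loop plays the role of the two-block decomposition steps and the penalty update mirrors feature (2) above; both are reduced here to their simplest projected-gradient form.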
Istituto di Analisi dei Sistemi ed Informatica "Antonio Ruberti" - IASI
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/377898
Citations
  • PMC: n/a
  • Scopus: 4
  • Web of Science: n/a