Convergence analysis of block majorize-minimize subspace approach

Original paper, published in Optimization Letters.

Abstract

We consider the minimization of a differentiable, but not necessarily convex, function F defined on \({\mathbb {R}}^N\) with a Lipschitz-continuous gradient. We propose an accelerated gradient descent approach that combines three strategies: (i) a variable metric derived from the majorization-minimization principle; (ii) a subspace strategy incorporating information from past iterates; (iii) a block alternating update. Under the assumption that F satisfies the Kurdyka–Łojasiewicz property, we give conditions under which the sequence generated by the resulting block majorize-minimize subspace algorithm converges to a critical point of the objective function, and we exhibit convergence rates for its iterates.
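
To fix ideas, the following is a minimal sketch of one iteration of a majorize-minimize (MM) subspace scheme of the kind analyzed here: a quadratic majorant of F at the current point, whose curvature matrix is supplied by the MM principle, is minimized in closed form over a low-dimensional subspace spanned by the negative gradient and the previous step (a memory-gradient subspace). This is an illustrative sketch under simplifying assumptions, not the authors' exact algorithm; the helper names grad_f and majorant_metric and the two-direction subspace choice are hypothetical choices made for the example. The block alternating variant would apply the same update to one block of coordinates at a time.

```python
import numpy as np

def mm_subspace_step(grad_f, majorant_metric, x, x_prev):
    """One illustrative MM memory-gradient subspace step (sketch only).

    grad_f(x)          -- gradient of F at x (hypothetical helper)
    majorant_metric(x) -- matrix A(x) such that the quadratic
                          q(y) = F(x) + <grad F(x), y-x> + 0.5 (y-x)^T A(x) (y-x)
                          majorizes F at x (MM principle)
    """
    g = grad_f(x)
    # Memory-gradient subspace: steepest-descent direction and previous step.
    D = np.column_stack([-g, x - x_prev])
    A = majorant_metric(x)
    # Minimizing the majorant over x + span(D) has the closed form
    #   u* = -(D^T A D)^+ D^T grad F(x).
    u = -np.linalg.pinv(D.T @ A @ D) @ (D.T @ g)
    return x + D @ u

# Toy quadratic F(x) = 0.5 x^T H x - b^T x, which majorizes itself exactly.
H = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad_f = lambda x: H @ x - b
metric = lambda x: H
x_prev, x = np.zeros(2), np.array([1.0, 1.0])
for _ in range(10):
    x, x_prev = mm_subspace_step(grad_f, metric, x, x_prev), x
print(x)  # approaches the minimizer of F, i.e. the solution of H x = b
```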

Notes

  1. https://www.mathworks.com/matlabcentral/fileexchange/50481-soot-l1-l2-norm-ratio-sparse-blind-deconvolution.

  2. https://people.clas.ufl.edu/hager/software/.

  3. https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html.

Acknowledgements

J.-B. Fest and E. Chouzenoux are with the CVN laboratory, CentraleSupélec, Inria, Université Paris-Saclay, 9 rue Joliot Curie, 91190 Gif-sur-Yvette, France. Email: first.last@centralesupelec.fr. This work was funded by the European Research Council Starting Grant MAJORIS ERC-2019-STG-850925.

Funding

This research work received funding support from the European Research Council Starting Grant MAJORIS ERC-2019-STG-850925.

Author information

Corresponding author

Correspondence to Jean-Baptiste Fest.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Chouzenoux, E., Fest, JB. Convergence analysis of block majorize-minimize subspace approach. Optim Lett (2023). https://doi.org/10.1007/s11590-023-02055-z
