A reduced Jacobian method with full convergence property

  • Original Paper
  • Published in: Optimization Letters

Abstract

In this paper, we propose a variant of the reduced Jacobian method (RJM) introduced by El Maghri and Elboulqe (J Optim Theory Appl 179:917–943, 2018) for multicriteria optimization under linear constraints. The motivation is that, contrary to RJM, which only has global convergence to Pareto KKT-stationary points in the classical sense of accumulation points, the new variant possesses the full convergence property: the entire sequence converges whenever the objectives are quasiconvex. Simulations are reported showing the performance of this variant compared with RJM and the nondominated sorting genetic algorithm (NSGA-II).
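Although the full paper is not reproduced here, the abstract's setting can be made concrete with a small illustration. Steepest-descent-type multiobjective methods such as [22] compute, at a point \(x\), a common descent direction by solving \(\min_{t,d}\; t + \tfrac12\|d\|^2\) subject to \(\nabla f_i(x)^{T} d \le t\) for every objective \(f_i\); the optimal value is zero exactly when \(x\) is Pareto KKT-stationary, and otherwise the minimizer \(d\) decreases all objectives at once. The sketch below illustrates only this generic subproblem, not the authors' RJM, which in addition handles the linear constraints by working with a reduced Jacobian in the spirit of the classical reduced gradient method [43]; the quadratic test objectives and the SciPy-based solver are assumptions made purely for illustration.

```python
# Illustrative sketch only: a Fliege-Svaiter-type common descent direction [22],
# not the reduced Jacobian method of the paper.
import numpy as np
from scipy.optimize import minimize


def common_descent_direction(grads):
    """Solve  min_{t,d}  t + 0.5*||d||^2   s.t.  grad_i^T d <= t  for all i.

    grads: (m, n) array of objective gradients at the current point.
    Optimal value 0 means the point is Pareto KKT-stationary; otherwise
    the returned d is a descent direction for every objective at once.
    """
    n = grads.shape[1]
    z0 = np.zeros(n + 1)                                   # z = (t, d)

    def obj(z):
        return z[0] + 0.5 * z[1:] @ z[1:]

    cons = [{"type": "ineq",                               # t - grad_i^T d >= 0
             "fun": (lambda z, g=g: z[0] - g @ z[1:])} for g in grads]

    res = minimize(obj, z0, method="SLSQP", constraints=cons)
    return res.x[1:], res.fun


# Hypothetical bi-objective example: f1(x) = ||x - a||^2, f2(x) = ||x - b||^2.
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x = np.array([2.0, 2.0])
grads = np.vstack([2.0 * (x - a), 2.0 * (x - b)])
d, theta = common_descent_direction(grads)
print("descent direction:", d, "optimal value:", theta)   # theta < 0 here
```

In this vocabulary, the full convergence property claimed in the abstract means that, under quasiconvex objectives, the whole iterate sequence produced by the proposed variant converges to a single Pareto KKT-stationary point, rather than merely having KKT-stationary accumulation points as guaranteed for RJM.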

Notes

  1. These MOPs were selected from the literature so that the reader can compare our method with others, e.g., the deterministic SQP-like methods tested in [2, Tables 2–3] (see [1] about the dimensions “m”).

  2. Here, the metric \(m_{p,s}\) may be 1/P, 1/HV, GD or CPU, so that all these metrics have the same asymptotic behaviour in the sense that the smaller the measure, the better the solver (see the sketch below).
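The indexing \(m_{p,s}\) (problem \(p\), solver \(s\)) is the one commonly used in performance profiles in the style of Dolan and Moré. Assuming that is the intended comparison, and using purely hypothetical metric values, a minimal sketch of such a profile computation is:

```python
# Sketch of a performance-profile computation for metrics m_{p,s} with
# "smaller is better" (1/P, 1/HV, GD or CPU).  Hypothetical values and a
# Dolan-More-style reading are assumed; this need not match the paper's
# exact comparison protocol.
import numpy as np


def performance_profile(M, taus):
    """M[p, s] = metric of solver s on problem p (smaller is better).

    Returns rho[s, k]: the fraction of problems on which solver s is
    within a factor taus[k] of the best solver for that problem.
    """
    M = np.asarray(M, dtype=float)
    ratios = M / M.min(axis=1, keepdims=True)    # r_{p,s} = m_{p,s} / min_s m_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(M.shape[1])])


# Hypothetical CPU times (seconds) for 3 problems and 2 solvers:
M = [[0.8, 1.2],
     [2.0, 1.5],
     [0.5, 0.5]]
print(performance_profile(M, taus=[1.0, 1.5, 2.0]))
```

For each solver, the profile value at a factor \(\tau\) is the fraction of problems on which that solver's metric is within \(\tau\) times the best metric recorded for that problem; since smaller is better for all four measures, the same computation applies to 1/P, 1/HV, GD and CPU alike.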

References

  1. Ansary, M.A.T., Panda, G.: A sequential quadratic programming method for constrained multi-objective optimization problems. J. Appl. Math. Comput. 64, 379–397 (2020)

  2. Ansary, M.A.T., Panda, G.: A sequential quadratically constrained quadratic programming technique for a multi-objective optimization problem. Eng. Optim. 51, 22–41 (2019)

  3. Bandyopadhyay, S., Pal, S.K., Aruna, B.: Multiobjective GAs, quantitative indices, and pattern classification. IEEE Trans. Syst. Man Cybern. B Cybern. 34, 2088–2099 (2004)

  4. Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming: Theory and Algorithms. Wiley, New York (2006)

  5. Bello-Cruz, J.Y., Lucambio Pérez, L.R., Melo, J.G.: Convergence of the projected gradient method for quasiconvex multiobjective optimization. Nonlin. Anal. Theor. Methods Appl. 74, 5268–5273 (2011)

  6. Benoist, J., Borwein, J.M., Popovici, N.: A characterization of quasiconvex vector-valued functions. Proc. Am. Math. Soc. 131, 1109–1113 (2003)

  7. Bento, G.C., Cruz Neto, J.X., Oliveira, P.R., Soubeyran, A.: The self regulation problem as an inexact steepest descent method for multicriteria optimization. Eur. J. Oper. Res. 235, 494–502 (2014)

  8. Burachik, R., Drummond, L.M.G., Iusem, A.N., Svaiter, B.F.: Full convergence of the steepest descent method with inexact line searches. Optimization 32, 137–146 (1995)

  9. Chankong, V., Haimes, Y.Y.: Multiobjective Decision Making: Theory and Methodology. North-Holland, New York (1983)

  10. Cocchi, G., Lapucci, M.: An augmented Lagrangian algorithm for multi-objective optimization. Comput. Optim. Appl. 77, 29–56 (2020)

  11. Collette, Y., Siarry, P.: Optimisation multiobjectif. Eyrolles, Paris (2002)

  12. Das, S.K., Goswami, A., Alam, S.S.: Multiobjective transportation problem with interval cost, source and destination parameters. Eur. J. Oper. Res. 117, 100–112 (1999)

  13. Deb, K.: Multi-objective Optimization Using Evolutionary Algorithms. Wiley, Chichester (2001)

  14. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6, 182–197 (2002)

  15. Drummond, L.M.G., Iusem, A.N.: A projected gradient method for vector optimization problems. Comput. Optim. Appl. 28, 5–29 (2004)

  16. Drummond, L.M.G., Svaiter, B.F.: A steepest descent method for vector optimization. J. Comput. Appl. Math. 175, 395–414 (2005)

  17. Ehrgott, M.: Multicriteria Optimization. Springer, Berlin (2005)

  18. El Maghri, M., Elboulqe, Y.: Correction to: Reduced Jacobian method. J. Optim. Theory Appl. 187, 304–304 (2020)

  19. El Maghri, M., Elboulqe, Y.: Reduced Jacobian method. J. Optim. Theory Appl. 179, 917–943 (2018)

  20. Fliege, J., Vaz, A.I.F.: A method for constrained multiobjective optimization based on SQP techniques. SIAM J. Optim. 26, 2091–2119 (2016)

  21. Fliege, J., Drummond, L.M.G., Svaiter, B.F.: Newton’s method for multiobjective optimization. SIAM J. Optim. 20, 602–626 (2009)

  22. Fliege, J., Svaiter, B.F.: Steepest descent methods for multicriteria optimization. Math. Methods Oper. Res. 51, 479–494 (2000)

  23. Fukuda, E.H., Drummond, L.M.G.: On the convergence of the projected gradient method for vector optimization. Optimization 60, 1009–1021 (2011)

  24. García-Palomares, U.M., Burguillo-Rial, J.C., González-Castaño, F.J.: Explicit gradient information in multiobjective optimization. Oper. Res. Lett. 36, 722–725 (2008)

  25. Huband, S., Hingston, P., Barone, L., While, L.: A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans. Evol. Comput. 10, 477–506 (2006)

  26. Iusem, A.N., Melo, J.G., Serra, R.G.: A strongly convergent proximal point method for vector optimization. J. Optim. Theory Appl. 190, 183–200 (2021)

  27. Luenberger, D.G., Ye, Y.: Linear and Nonlinear Programming, 3rd edn. Springer, New York (2008)

  28. Mao, J., Hirasawa, K., Hu, J., Murata, J.: Genetic symbiosis algorithm for multiobjective optimization problem. In: Proceedings of the 9th IEEE International Workshop on Robot and Human Interactive Communication (2000). https://doi.org/10.1109/ROMAN.2000.892484

  29. Mokhtar-Kharroubi, H.: Sur la convergence théorique de la méthode du gradient réduit généralisé. Numer. Math. 34, 73–85 (1980)

  30. Morovati, V., Pourkarimi, L.: Extension of Zoutendijk method for solving constrained multiobjective optimization problems. Eur. J. Oper. Res. 273, 44–57 (2019)

  31. Mukai, H.: Algorithms for multicriterion optimization. IEEE Trans. Autom. Control 25, 177–186 (1980)

  32. Oliveira, S.L.C., Ferreira, P.A.V.: Bi-objective optimization with multiple decision makers: a convex approach to attain majority solutions. J. Oper. Res. Soc. 51, 333–340 (2000)

  33. Radhakrishnan, A.: Evolutionary algorithms for multiobjective optimization with applications in portfolio optimization. Master’s thesis, North Carolina State University (2007)

  34. Smeers, Y.: Generalized reduced gradient method as an extension of feasible direction methods. J. Optim. Theory Appl. 22, 209–226 (1977)

  35. Sun, X., Teo, K.L., Long, X.J.: Characterizations of robust \(\varepsilon\)-quasi optimal solutions for nonsmooth optimization problems with uncertain data. Optimization 70, 847–870 (2021)

  36. Sun, X., Teo, K.L., Long, X.J.: Some characterizations of approximate solutions for robust semi-infinite optimization problems. J. Optim. Theory Appl. 191, 281–310 (2021)

  37. Sun, X., Teo, K.L., Zeng, J., Liu, L.: Robust approximate optimal solutions for nonlinear semi-infinite programming with uncertainty. Optimization 69, 2109–2129 (2020)

  38. Sun, X., Tang, L., Zeng, J.: Characterizations of approximate duality and saddle point theorems for nonsmooth robust vector optimization. Numer. Funct. Anal. Optim. 41, 462–482 (2020)

  39. Thang, T.N., Luc, D.T., Kim, N.T.B.: Solving generalized convex multiobjective programming problems by a normal direction method. Optimization 65, 2269–2292 (2016)

  40. Van Veldhuizen, D.A., Lamont, G.B.: On measuring multiobjective evolutionary algorithm performance. Proc. IEEE Cong. Evol. Comput. (2000). https://doi.org/10.1109/CEC.2000.870296

  41. Wang, C.Y.: On convergence property of an improved reduced gradient method. Kexue Tongbao 28, 577–582 (1983)

  42. Wang, C.Y.: Simplification and convergence characteristics of a new pivot method and Levitin–Polyak gradient projection method. Acta Math. Appl. Sin. 4, 37–52 (1981)

  43. Wolfe, P.: Methods of nonlinear programming. In: Graves, R.L., Wolfe, P. (eds.) Recent Advances in Mathematical Programming, pp. 67–86. McGraw-Hill, New York (1963)

  44. Xidonas, P., Mavrotas, G., Hassapis, C., Zopounidis, C.: Robust multiobjective portfolio optimization: a minimax regret approach. Eur. J. Oper. Res. 262, 299–305 (2017)

  45. Yue, M., Han, J.: A new reduced gradient method. Sci. Sin. 22, 1099–1113 (1979)

  46. Zitzler, E., Knowles, J., Thiele, L.: Quality assessment of Pareto set approximations. In: Branke, J., Deb, K., Miettinen, K., Słowiński, R. (eds.) Multiobjective Optimization. Lecture Notes in Computer Science, vol. 5252, pp. 373–404. Springer (2008)

  47. Zitzler, E., Deb, K., Thiele, L.: Comparison of multiobjective evolutionary algorithms: empirical results. Evol. Comput. 8, 173–195 (2000)

  48. Zitzler, E., Thiele, L.: Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 3, 257–271 (1999)

Acknowledgements

The authors are grateful to the anonymous referees for their valuable comments and suggestions, including pertinent remarks and very interesting questions.

Author information

Corresponding author

Correspondence to M. El Maghri.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest, competing interests, or relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

El Maghri, M., Elboulqe, Y. A reduced Jacobian method with full convergence property. Optim Lett (2024). https://doi.org/10.1007/s11590-023-02083-9
