Abstract
In this paper, we propose a variant of the reduced Jacobian method (RJM) introduced by El Maghri and Elboulqe (J Optim Theory Appl 179:917–943, 2018) for multicriteria optimization under linear constraints. The motivation is that, in contrast to RJM, which has only global convergence to Pareto KKT-stationary points in the classical sense of accumulation points, this new variant possesses the full convergence property: the entire sequence converges whenever the objectives are quasiconvex. Simulations are reported showing the performance of this variant compared with RJM and the nondominated sorting genetic algorithm (NSGA-II).
Notes
Here, the metric \(m_{p,s}\) may be 1/P, 1/HV, GD or CPU, so that all these metrics have the same asymptotic behaviour, in the sense that the smaller the measure, the better the solver.
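As an illustration, the convention above (smaller \(m_{p,s}\) means a better solver \(s\) on problem \(p\)) is exactly what a Dolan–Moré-style performance profile requires. The sketch below is not taken from the paper; the solver names and metric values are hypothetical, and it only shows how such a profile would be computed once every metric is oriented so that smaller is better.

```python
# Sketch of a Dolan-Moré performance profile, assuming every metric m_{p,s}
# has been oriented so that smaller is better (hence 1/P, 1/HV, GD or CPU).
# Solver names and values below are hypothetical illustrations.

def performance_profile(metrics, taus):
    """metrics[p][s] = m_{p,s} (smaller is better).
    Returns rho[s] = list of fractions of problems on which solver s's
    performance ratio r_{p,s} = m_{p,s} / min_{s'} m_{p,s'} is <= tau,
    one entry per threshold tau in taus."""
    solvers = {s for row in metrics for s in row}
    ratios = {s: [] for s in solvers}
    for row in metrics:
        best = min(row.values())          # best metric value on this problem
        for s, m in row.items():
            ratios[s].append(m / best)    # ratio to the best solver
    n = len(metrics)
    return {s: [sum(r <= t for r in rs) / n for t in taus]
            for s, rs in ratios.items()}

# Hypothetical CPU times (seconds) on three test problems.
metrics = [
    {"RJM": 1.0, "variant": 0.5},
    {"RJM": 2.0, "variant": 2.0},
    {"RJM": 4.0, "variant": 1.0},
]
rho = performance_profile(metrics, taus=[1.0, 2.0, 4.0])
```

Here `rho[s][i]` is the fraction of problems the (hypothetical) solver solves within a factor `taus[i]` of the best solver; with the data above, `"variant"` attains ratio 1 on every problem.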
References
Ansary, M.A.T., Panda, G.: A sequential quadratic programming method for constrained multi-objective optimization problems. J. Appl. Math. Comput. 64, 379–397 (2020)
Ansary, M.A.T., Panda, G.: A sequential quadratically constrained quadratic programming technique for a multi-objective optimization problem. Eng. Optim. 51, 22–41 (2019)
Bandyopadhyay, S., Pal, S.K., Aruna, B.: Multiobjective GAs, quantitative indices, and pattern classification. IEEE Trans. Syst. Man Cybern. B Cybern. 34, 2088–2099 (2004)
Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming: Theory and Algorithms. Wiley, New York (2006)
Bello-Cruz, J.Y., Lucambio Pérez, L.R., Melo, J.G.: Convergence of the projected gradient method for quasiconvex multiobjective optimization. Nonlin. Anal. Theor. Methods Appl. 74, 5268–5273 (2011)
Benoist, J., Borwein, J.M., Popovici, N.: A characterization of quasiconvex vector-valued functions. Proc. Am. Math. Soc. 131, 1109–1113 (2003)
Bento, G.C., Cruz Neto, J.X., Oliveira, P.R., Soubeyran, A.: The self regulation problem as an inexact steepest descent method for multicriteria optimization. Eur. J. Oper. Res. 235, 494–502 (2014)
Burachik, R., Drummond, L.M.G., Iusem, A.N., Svaiter, B.F.: Full convergence of the steepest descent method with inexact line searches. Optimization 32, 137–146 (1995)
Chankong, V., Haimes, Y.Y.: Multiobjective Decision Making: Theory and Methodology. North-Holland, New York (1983)
Cocchi, G., Lapucci, M.: An augmented Lagrangian algorithm for multi-objective optimization. Comput. Optim. Appl. 77, 29–56 (2020)
Collette, Y., Siarry, P.: Optimisation Multiobjectif. Eyrolles, Paris (2002)
Das, S.K., Goswami, A., Alam, S.S.: Multiobjective transportation problem with interval cost, source and destination parameters. Eur. J. Oper. Res. 117, 100–112 (1999)
Deb, K.: Multi-objective Optimization Using Evolutionary Algorithms. Wiley, Chichester (2001)
Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6, 182–197 (2002)
Drummond, L.M.G., Iusem, A.N.: A projected gradient method for vector optimization problems. Comput. Optim. Appl. 28, 5–29 (2004)
Drummond, L.M.G., Svaiter, B.F.: A steepest descent method for vector optimization. J. Comput. Appl. Math. 175, 395–414 (2005)
Ehrgott, M.: Multicriteria Optimization. Springer, Berlin (2005)
El Maghri, M., Elboulqe, Y.: Correction to: Reduced Jacobian method. J. Optim. Theory Appl. 187, 304–304 (2020)
El Maghri, M., Elboulqe, Y.: Reduced Jacobian method. J. Optim. Theory Appl. 179, 917–943 (2018)
Fliege, J., Vaz, A.I.F.: A method for constrained multiobjective optimization based on SQP techniques. SIAM J. Optim. 26, 2091–2119 (2016)
Fliege, J., Drummond, L.M.G., Svaiter, B.F.: Newton’s method for multiobjective optimisation. SIAM J. Optim. 20, 602–626 (2009)
Fliege, J., Svaiter, B.F.: Steepest descent methods for multicriteria optimization. Math. Methods Oper. Res. 51, 479–494 (2000)
Fukuda, E.H., Drummond, L.M.G.: On the convergence of the projected gradient method for vector optimization. Optimization 60, 1009–1021 (2011)
García-Palomares, U.M., Burguillo-Rial, J.C., González-Castaño, F.J.: Explicit gradient information in multiobjective optimization. Oper. Res. Lett. 36, 722–725 (2008)
Huband, S., Hingston, P., Barone, L., While, L.: A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans. Evol. Comput. 10, 477–506 (2006)
Iusem, A.N., Melo, J.G., Serra, R.G.: A strongly convergent proximal point method for vector optimization. J. Optim. Theory Appl. 190, 183–200 (2021)
Luenberger, D.G., Ye, Y.: Linear and Nonlinear Programming, 3rd edn. Springer, New York (2008)
Mao, J., Hirasawa, K., Hu, J., Murata, J.: Genetic symbiosis algorithm for multiobjective optimization problem. In: Proceedings of 9th IEEE International Workshop in Robot and Human Interactive Communications (2000). https://doi.org/10.1109/ROMAN.2000.892484
Mokhtar-Kharroubi, H.: Sur la convergence théorique de la méthode du gradient réduit généralisé. Numer. Math. 34, 73–85 (1980)
Morovati, V., Pourkarimi, L.: Extension of Zoutendijk method for solving constrained multiobjective optimization problems. Eur. J. Oper. Res. 273, 44–57 (2019)
Mukai, H.: Algorithms for multicriterion optimization. IEEE Trans. Autom. Control 25, 177–186 (1980)
Oliveira, S.L.C., Ferreira, P.A.V.: Bi-objective optimization with multiple decision makers: a convex approach to attain majority solutions. J. Oper. Res. Soc. 51, 333–340 (2000)
Radhakrishnan, A.: Evolutionary algorithms for multiobjective optimization with applications in portfolio optimization. Master’s thesis, North Carolina State University (2007)
Smeers, Y.: Generalized reduced gradient method as an extension of feasible direction methods. J. Optim. Theory Appl. 22, 209–226 (1977)
Sun, X., Teo, K.L., Long, X.J.: Characterizations of robust \(\varepsilon\)-quasi optimal solutions for nonsmooth optimization problems with uncertain data. Optimization 70, 847–870 (2021)
Sun, X., Teo, K.L., Long, X.J.: Some characterizations of approximate solutions for robust semi-infinite optimization problems. J. Optim. Theory Appl. 191, 281–310 (2021)
Sun, X., Teo, K.L., Zeng, J., Liu, L.: Robust approximate optimal solutions for nonlinear semi-infinite programming with uncertainty. Optimization 69, 2109–2129 (2020)
Sun, X., Tang, L., Zeng, J.: Characterizations of approximate duality and saddle point theorems for nonsmooth robust vector optimization. Numer. Funct. Anal. Optim. 41, 462–482 (2020)
Thang, T.N., Luc, D.T., Kim, N.T.B.: Solving generalized convex multiobjective programming problems by a normal direction method. Optimization 65, 2269–2292 (2016)
Van Veldhuizen, D.A., Lamont, G.B.: On measuring multiobjective evolutionary algorithm performance. Proc. IEEE Cong. Evol. Comput. (2000). https://doi.org/10.1109/CEC.2000.870296
Wang, C.Y.: On convergence property of an improved reduced gradient method. Kexue Tongbao 28, 577–582 (1983)
Wang, C.Y.: Simplification and convergence characteristics of a new pivot method and Levitin–Polyak gradient projection method. Acta Math. Appl. Sin. 4, 37–52 (1981)
Wolfe, P.: Methods of nonlinear programming. In: Graves, R.L., Wolfe, P. (eds.) Recent Advances in Mathematical Programming, pp. 67–86. McGraw-Hill, New York (1963)
Xidonas, P., Mavrotas, G., Hassapis, C., Zopounidis, C.: Robust multiobjective portfolio optimization: a minimax regret approach. Eur. J. Oper. Res. 262, 299–305 (2017)
Yue, M., Han, J.: A new reduced gradient method. Sci. Sin. 22, 1099–1113 (1979)
Zitzler, E., Knowles, J., Thiele, L.: Quality assessment of Pareto set approximations. In: Branke, J., Deb, K., Miettinen, K., Słowiński, R. (eds.) Multiobjective Optimization. Lecture Notes in Computer Science, vol. 5252, pp. 373–404. Springer (2008)
Zitzler, E., Deb, K., Thiele, L.: Comparison of multiobjective evolutionary algorithms: empirical results. Evol. Comput. 8, 173–195 (2000)
Zitzler, E., Thiele, L.: Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 3, 257–271 (1999)
Acknowledgements
The authors are grateful to the anonymous referees for their valuable comments and suggestions, including pertinent remarks and very interesting questions.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors have no conflicts of interest, financial or non-financial interests, competing interests, or other statements to disclose.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
El Maghri, M., Elboulqe, Y. A reduced Jacobian method with full convergence property. Optim Lett (2024). https://doi.org/10.1007/s11590-023-02083-9