Relaxations and duality for multiobjective integer programming

  • Full Length Paper
  • Series A
Mathematical Programming

Abstract

Multiobjective integer programs (MOIPs) simultaneously optimize multiple objective functions over a set of linear constraints and integer variables. In this paper, we present continuous, convex hull and Lagrangian relaxations for MOIPs and examine the relationship among them. The convex hull relaxation is tight at supported solutions, i.e., those that can be derived via a weighted-sum scalarization of the MOIP. At unsupported solutions, the convex hull relaxation is not tight and a Lagrangian relaxation may provide a tighter bound. Using the Lagrangian relaxation, we define a Lagrangian dual of an MOIP that satisfies weak duality and is strong at supported solutions under certain conditions on the primal feasible region. We include a numerical experiment to illustrate that bound sets obtained via Lagrangian duality may yield tighter bounds than those from a convex hull relaxation. Subsequently, we generalize the integer programming value function to MOIPs and use its properties to motivate a set-valued superadditive dual that is strong at supported solutions. We also define a simpler vector-valued superadditive dual that exhibits weak duality but is strongly dual if and only if the primal has a unique nondominated point.

Notes

  1. Because we consider multiple optimization problems in this section, we use \({\mathcal {X}}_{\textrm{P}}\) and \({\mathcal {Y}}_{\textrm{P}}\) to denote the feasible region and set of feasible objective values for problem (P), respectively.

  2. Reference [10] considers a minimization problem, so the roles of the upper and lower bound sets are reversed; we adapt them accordingly here.

References

  1. Aneja, Y.P., Nair, K.P.K.: Bicriteria transportation problem. Manag. Sci. 25(1), 73–78 (1979)

  2. Benson, H.P.: Multi-objective optimization: Pareto optimal solutions, properties. In: Floudas, C.A., Pardalos, P.M. (eds.) Encyclopedia of Optimization, pp. 2478–2481. Springer, Boston (2009)

  3. Boland, N., Charkhgard, H., Savelsbergh, M.: A new method for optimizing a linear function over the efficient set of a multiobjective integer program. Eur. J. Oper. Res. 260(3), 904–919 (2017)

  4. Cerqueus, A., Przybylski, A., Gandibleux, X.: Surrogate upper bound sets for bi-objective bi-dimensional binary knapsack problems. Eur. J. Oper. Res. 244(2), 417–433 (2015)

  5. Corley, H.: Duality theory for the matrix linear programming problem. J. Math. Anal. Appl. 104(1), 47–52 (1984)

  6. Dächert, K., Klamroth, K., Lacour, R., Vanderpooten, D.: Efficient computation of the search region in multi-objective optimization. Eur. J. Oper. Res. 260(3), 841–855 (2017)

  7. Ehrgott, M.: Multicriteria Optimization. Springer, Berlin (2005)

  8. Ehrgott, M.: A discussion of scalarization techniques for multiple objective integer programming. Ann. Oper. Res. 147(1), 343–360 (2006)

  9. Ehrgott, M., Gandibleux, X.: Bounds and bound sets for biobjective combinatorial optimization problems. In: Multiple Criteria Decision Making in the New Millennium, pp. 241–253. Springer (2001)

  10. Ehrgott, M., Gandibleux, X.: Bound sets for biobjective combinatorial optimization problems. Comput. Oper. Res. 34(9), 2674–2694 (2007)

  11. Fisher, M.L.: The Lagrangian relaxation method for solving integer programming problems. Manag. Sci. 27(1), 1–18 (1981)

  12. Forget, N., Gadegaard, S.L., Nielsen, L.R.: Warm-starting lower bound set computations for branch-and-bound algorithms for multiobjective integer linear programs. Eur. J. Oper. Res. 302(3), 909–924 (2022)

  13. Gale, D., Kuhn, H.W., Tucker, A.W.: Linear programming and the theory of games. Activity Analysis of Production and Allocation 13, 317–335 (1951)

  14. Gandibleux, X., Soleilhac, G., Przybylski, A.: vOptSolver: an ecosystem for multi-objective linear optimization. In: JuliaCon 2021 (2021)

  15. Gandibleux, X., Soleilhac, G., Przybylski, A., Lucas, F., Ruzika, S., Halffmann, P.: vOptSolver, a “get and run” solver of multiobjective linear optimization problems built on Julia and JuMP. In: MCDM2017: 24th International Conference on Multiple Criteria Decision Making, vol. 88 (2017)

  16. Gandibleux, X., Soleilhac, G., Przybylski, A., Ruzika, S.: vOptSolver: an open source software environment for multiobjective mathematical optimization. In: IFORS2017: 21st Conference of the International Federation of Operational Research Societies (2017)

  17. Geoffrion, A.M.: Proper efficiency and the theory of vector maximization. J. Math. Anal. Appl. 22(3), 618–630 (1968)

  18. Geoffrion, A.M.: Lagrangean relaxation and its uses in integer programming. Math. Program. 2, 82–114 (1974)

  19. Gourion, D., Luc, D.: Saddle points and scalarizing sets in multiple objective linear programming. Math. Methods Oper. Res. 80(1), 1–27 (2014)

  20. Haimes, Y., Lasdon, L., Wismer, D.: On a bicriterion formulation of the problems of integrated system identification and system optimization. IEEE Trans. Syst. Man Cybern. SMC–1(3), 296–297 (1971)

  21. Halffmann, P., Schäfer, L.E., Dächert, K., Klamroth, K., Ruzika, S.: Exact algorithms for multiobjective linear optimization problems with integer variables: a state of the art survey. J. Multi-Criteria Decis. Anal. 2022, 1–23 (2022)

  22. Hamel, A.H., Heyde, F., Löhne, A., Tammer, C., Winkler, K.: Closing the duality gap in linear vector optimization. J. Convex Anal. 11(1), 163–178 (2004)

  23. Heyde, F., Löhne, A.: Geometric duality in multiple objective linear programming. SIAM J. Optim. 19, 836–845 (2008)

  24. Heyde, F., Löhne, A., Tammer, C.: Set-valued duality theory for multiple objective linear programs and application to mathematical finance. Math. Methods Oper. Res. 69(1), 159–179 (2009)

  25. Hooker, J.N.: Integer programming duality. In: Floudas, C.A., Pardalos, P.M. (eds.) Encyclopedia of Optimization, pp. 1657–1667. Springer, Boston (2009)

  26. Isermann, H.: Proper efficiency and the linear vector maximum problem. Oper. Res. 22(1), 189–191 (1974)

  27. Isermann, H.: On some relations between a dual pair of multiple objective linear programs. Z. Oper. Res. 22(1), 33–41 (1978)

  28. Jeroslow, R.: Cutting-plane theory: algebraic methods. Discrete Math. 23(2), 121–150 (1978)

  29. Jozefowiez, N., Laporte, G., Semet, F.: A generic branch-and-cut algorithm for multiobjective optimization problems: application to the multilabel traveling salesman problem. INFORMS J. Comput. 24(4), 554–564 (2012)

  30. Klamroth, K., Tind, J., Zust, S.: Integer programming duality in multiple objective programming. J. Global Optim. 29(1), 1–18 (2004)

  31. Kornbluth, J.: Duality, indifference and sensitivity analysis in multiple objective linear programming. J. Oper. Res. Soc. 25(4), 599–614 (1974)

  32. Löhne, A.: Vector Optimization with Infimum and Supremum. Springer, Berlin (2011)

  33. Luc, D.T.: On duality in multiple objective linear programming. Eur. J. Oper. Res. 210(2), 158–168 (2011)

  34. Luc, D.T.: Multiobjective Linear Programming: An Introduction. Springer, Berlin (2016)

  35. Lust, T., Teghem, J.: Two-phase Pareto local search for the biobjective traveling salesman problem. J. Heuristics 16(3), 475–510 (2010)

  36. Machuca, E., Mandow, L.: Lower bound sets for biobjective shortest path problems. J. Global Optim. 64(1), 63–77 (2016)

  37. Makhorin, A.: GLPK (GNU linear programming kit). https://www.gnu.org/software/glpk (2012)

  38. Mavrotas, G., Diakoulaki, D.: Multi-criteria branch and bound: a vector maximization algorithm for mixed 0–1 multiple objective linear programming. Appl. Math. Comput. 171(1), 53–71 (2005)

  39. Özpeynirci, Ö., Köksalan, M.: An exact algorithm for finding extreme supported nondominated points of multiobjective mixed integer programs. Manag. Sci. 56(12), 2302–2315 (2010)

  40. Przybylski, A., Gandibleux, X.: Multi-objective branch and bound. Eur. J. Oper. Res. 260(3), 856–872 (2017)

  41. Przybylski, A., Gandibleux, X., Ehrgott, M.: A recursive algorithm for finding all nondominated extreme points in the outcome set of a multiobjective integer programme. INFORMS J. Comput. 22(3), 371–386 (2010)

  42. Przybylski, A., Gandibleux, X., Ehrgott, M.: A two phase method for multi-objective integer programming and its application to the assignment problem with three objectives. Discrete Optim. 7(3), 149–165 (2010)

  43. Rödder, W.: A generalized saddlepoint theory: its application to duality theory for linear vector optimum problems. Eur. J. Oper. Res. 1(1), 55–59 (1977)

  44. Sourd, F., Spanjaard, O.: A multiobjective branch-and-bound framework: application to the biobjective spanning tree problem. INFORMS J. Comput. 20(3), 472–484 (2008)

  45. Teghem, J.: Multi-objective integer linear programming. In: Floudas, C.A., Pardalos, P.M. (eds.) Encyclopedia of Optimization, pp. 2448–2454. Springer, Boston (2009)

  46. Ulungu, E.L., Teghem, J.: The two phases method: an efficient procedure to solve bi-objective combinatorial optimization problems. Found. Comput. Decis. Sci. 20(2), 149–165 (1995)

  47. Vincent, T., Seipp, F., Ruzika, S., Przybylski, A., Gandibleux, X.: Multiple objective branch and bound for mixed 0–1 linear programming: corrections and improvements for the biobjective case. Comput. Oper. Res. 40(1), 498–509 (2013)

  48. Wolsey, L.A.: Integer programming duality: price functions and sensitivity analysis. Math. Program. 20(1), 173–195 (1981)

  49. Wolsey, L.A., Nemhauser, G.L.: Integer and Combinatorial Optimization. Wiley, New York (2014)

Acknowledgements

This research was supported by the National Science Foundation Grant CMMI-1933373 and the Office of Naval Research Grant N00014-21-1-2262. The authors thank David Mildebrath of Amazon.com, Inc., and Tyler Perini of Rice University for their helpful comments. The authors also thank two anonymous referees and an anonymous associate editor whose insightful feedback helped significantly improve the paper.

Author information

Corresponding author

Correspondence to Andrew J. Schaefer.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Proofs of results in Sect. 2.1

The set-ordering in Definition 4 is well-defined for subsets of \({\mathbb {R}}^k\). We extend it to \(\pm M_{\infty }\) by defining \(-M_{\infty } \underline{\preceq } S \underline{\preceq } M_{\infty }\) for all nonempty sets \(S \subseteq {\mathbb {R}}^k\), \(-M_{\infty } \underline{\preceq } -M_{\infty }\), and \(M_{\infty } \underline{\preceq } M_{\infty }\). We further assume that \(S \underline{\not \preceq } -M_{\infty }\) and \(M_{\infty } \underline{\not \preceq } S\) for any \(S \subseteq {\mathbb {R}}^k\). The relation “\(\underline{\preceq }\)” is thus defined on the extended power set of \({\mathbb {R}}^k\) (excluding the empty set) and is transitive thereon, but neither reflexive nor antisymmetric. However, the relation defines a partial order on the family of sets \({\mathcal {E}}\) defined in (1).
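For concreteness, the following Python sketch implements the orderings used throughout this appendix for finite subsets of \({\mathbb {R}}^k\): the componentwise order \(\leqq \), the dominance relation \(\le \), the nondominated set \(\mathrm{Max}(\cdot )\), and the set ordering \(\underline{\preceq }\) as it is used in the proofs below. Definition 4 itself is stated earlier in the paper and is not reproduced here, so the sketch infers the relation from how the proofs use it; the function names are ours, and the extended elements \(\pm M_{\infty }\) are ignored.

    import numpy as np

    def leqq(a, b):
        """a ≦ b: componentwise less-than-or-equal."""
        return bool(np.all(np.asarray(a) <= np.asarray(b)))

    def dominated_by(a, b):
        """a ≤ b: a ≦ b and a ≠ b, i.e., a is dominated from above by b."""
        a, b = np.asarray(a), np.asarray(b)
        return bool(np.all(a <= b) and np.any(a < b))

    def Max(S):
        """Points of a finite set S that are nondominated from above."""
        return [s for s in S if not any(dominated_by(s, t) for t in S)]

    def set_precedes(S, U):
        """The set ordering S ⪯ U as used in the proofs: every s in S is ≦ some
        u in U, and no u in U is dominated from above by an element of S."""
        return (all(any(leqq(s, u) for u in U) for s in S)
                and not any(dominated_by(u, s) for u in U for s in S))

    # Example: Max({(0,0),(1,0),(0,1)}) = {(1,0),(0,1)}, and this set precedes {(1,1)}.
    S = [(0, 0), (1, 0), (0, 1)]
    print(Max(S))                          # [(1, 0), (0, 1)]
    print(set_precedes(Max(S), [(1, 1)]))  # True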

Proposition 1

Let \(S \subseteq {\mathbb {R}}^k\) be nonempty. If S has points that are nondominated from above, then \(\mathrm{Max}(S) \in {\mathcal {E}}\). If S has points that are nondominated from below, then \(\mathrm{Min}(S) \in {\mathcal {E}}\).

Proof

We prove the result for \(\mathrm{Max}(S)\); the proof for \(\mathrm{Min}(S)\) is similar and therefore omitted. Suppose S is nonempty and has points that are nondominated from above. Let \(s,t \in \mathrm{Max}(S)\) with \(s \ne t\). By definition of nondominance, \(s \not \le t\) and \(t \not \le s\), so distinct elements of \(\mathrm{Max}(S)\) are incomparable. Thus, \(\mathrm{Max}(S) \in {\mathcal {E}}\). \(\square \)

Proposition 2

The relation \(\underline{\preceq }\) defines a partial order on \({\mathcal {E}}\).

Proof

We first show that the relation is reflexive. Consider a set \(S \in {\mathcal {E}}\). If \(S = \pm M_{\infty }\), then \(S \underline{\preceq } S\) by definition of \(\pm M_{\infty }\). Otherwise, if \(s \in S\), then \(s \leqq s\) by the reflexivity of \(\leqq \). Moreover, if \(t \in S\) with \(s \not = t\), then \(t \not \le s\) because distinct elements of S are incomparable as \(S \in {\mathcal {E}}\). Therefore \(S \underline{\preceq } S\).

Next, to see that \(\underline{\preceq }\) is antisymmetric, consider \(S,T \in {\mathcal {E}}\) with \(S \underline{\preceq } T\) and \(T \underline{\preceq } S\). If one of \(S,T = \pm M_{\infty }\), then \(S = T\). Otherwise, let \(S,T \subseteq {\mathbb {R}}^k\) and \(s \in S\). Because \(S \underline{\preceq } T\), there exists \(t \in T\) such that \(s \leqq t\). On the other hand, because \(T \underline{\preceq } S\), there is \(s' \in S\) such that \(t \leqq s'\). Therefore, \(s \leqq t \leqq s'\). But distinct elements of S are incomparable, which implies that \(s = s'\), so that \(s = s' = t\). Therefore \(S \subseteq T\). Similarly, \(T \subseteq S\) as well, so that \(S = T\).

Finally, to establish transitivity, suppose \(S,T,U \in {\mathcal {E}}\) with \(S \underline{\preceq } T\) and \(T \underline{\preceq } U\). If \(S = -M_{\infty }\) or \(U = M_{\infty }\), the result holds trivially. If S or \(T = M_{\infty }\), then \(U= M_{\infty }\) and the result holds. Similarly, if T or U equals \(-M_{\infty }\), then so does S and the result follows. Assume, therefore, that \(S,T,U \ne \pm M_{\infty }\).

Let \(s \in S\). Then, there exists \(t \in T\) such that \(s \leqq t\) and \(u \in U\) such that \(t \leqq u\). Therefore \(s \leqq u\). On the other hand, suppose there exists \(u \in U\) and \(s \in S\) such that \(u \le s\). Then, there is \(t \in T\) such that \(s \leqq t\), which implies that \(u \le s \leqq t\). This contradicts the hypothesis that \(T \underline{\preceq } U\). So, s and u must be incomparable. Therefore, \(S \underline{\preceq } U\).

Thus, the relation \(\underline{\preceq }\) is reflexive, antisymmetric and transitive on \({\mathcal {E}}\) and therefore defines a partial order thereon. \(\square \)

Lemma 2

Let \(T \subseteq S \subseteq {\mathbb {R}}^k\) be nonempty sets, and let \(U \in {\mathcal {E}}\) such that \(S \underline{\preceq } U\). Then, \(T \underline{\preceq } U\).

Proof

If \(U = M_{\infty }\), the result holds by definition. Therefore, suppose \(U \ne M_{\infty }\), and let \(t \in T\). Because \(t \in S\), there is an element \(u \in U\) such that \(t \leqq u\). Moreover, given \(u \in U\), there is no \(t \in T\) such that \(u \le t\) because that would contradict \(S \underline{\preceq } U\). Thus, \(T \underline{\preceq } U\). \(\square \)

Lemma 3

Let \(S \subseteq {\mathbb {R}}^k\) be a closed set and let \(U \in {\mathcal {E}}\) such that \(S \underline{\preceq } U\). Then, \(\mathrm{Max}(S) \underline{\preceq } U\).

Proof

If \(S = \emptyset \), then \(\mathrm{Max}(S) = -M_{\infty } \underline{\preceq } U\). If S is unbounded above, then we must have \(U = M_{\infty }\) and \(\mathrm{Max}(S) = M_{\infty } \underline{\preceq } U\). Suppose, therefore, that S is nonempty and bounded above. The result then follows from Lemma 2 by choosing \(T = \mathrm{Max}(S)\). \(\square \)

Appendix B: Omitted proofs from Sect. 3

Proposition 5

If \({\mathcal {Y}}_{\textrm{MOLP}}\) is the set of feasible objective values of (MOLP), then \(\mathrm{Max}({\mathcal {Y}}_{\textrm{MOIP}}) \underline{\preceq } \mathrm{Max}({\mathcal {Y}}_{\textrm{MOLP}})\).

Proof

If (MOIP) is infeasible, then \(\mathrm{Max}({\mathcal {Y}}_{\textrm{MOIP}}) = -M_{\infty }\) and the result is trivially true. If (MOIP) is unbounded, then so is (MOLP) because \({\mathcal {Y}}_{\textrm{MOIP}}\subseteq {\mathcal {Y}}_{\textrm{MOLP}}\). Finally, if \(Cx^* \in \mathrm{Max}({\mathcal {Y}}_{\textrm{MOIP}})\), then \(Cx^*\) is a feasible objective value of (MOLP). Therefore, either \(\mathrm{Max}({\mathcal {Y}}_{\textrm{MOLP}}) = M_{\infty }\) or there exists \(y \in \mathrm{Max}({\mathcal {Y}}_{\textrm{MOLP}})\) such that \(Cx^* \leqq y\).

Now suppose \(C{\tilde{x}} \in \mathrm{Max}({\mathcal {Y}}_{\textrm{MOLP}})\) with \(Cx^* \not = C{\tilde{x}}\). Suppose, for contradiction, that \(C{\tilde{x}} \le Cx^*\). Then, because \(x^* \in {\mathcal {X}}\), it is also feasible to (MOLP), which contradicts the nondominance of \(C{\tilde{x}}\). Thus, \(C{\tilde{x}} \not \le Cx^*\), so that \(Cx^*\) and \(C{\tilde{x}}\) are incomparable. \(\square \)

Proposition 6

If \({\mathcal {Y}}_{\textrm{CH}}\) is the set of feasible objective values of (CH), then \(\mathrm{Max}({\mathcal {Y}}_{\textrm{MOIP}}) \underline{\preceq } \mathrm{Max}({\mathcal {Y}}_{\textrm{CH}})\).

Proof

The proof is similar to that of Proposition 5 and is omitted. \(\square \)

Proposition 7

Let \(x^*\) be an efficient solution of (CH). Then, \(x^*\) is an efficient solution of (MOIP) if and only if \(x^*\) is integral.

Proof

If \(x^*\) is efficient for (MOIP), then it must be feasible to (MOIP) and therefore integral. For the converse, suppose \(x^*\) is efficient for (CH) and integral. Then, \(x^*\) is an integral point in the feasible region of (CH) and therefore \(x^{*} \in {\mathcal {X}}_{\textrm{MOIP}}\). Suppose, for contradiction, that \(x^*\) is not efficient for (MOIP). Proposition 6 implies that for every x feasible to (MOIP), there is a y feasible to (CH) such that \(Cx \leqq Cy\). In particular, if \(x^*\) is not efficient for (MOIP), then there is a y feasible to (CH) such that \(Cx^*\le Cy\). However, this contradicts the hypothesis that \(x^{*}\) is efficient for (CH). Thus, \(x^*\) must be efficient for (MOIP). \(\square \)

Proposition 8

If (CH) has efficient solutions, then it must have an integral efficient solution.

Proof

Suppose that \(\text {conv}({\mathcal {X}}_{\textrm{MOIP}})\) has a vertex. If an MOLP has efficient solutions, then at least one must occur at a vertex of the feasible region [34, Theorem 4.3.8(ii)]. In particular, (CH) has at least one efficient vertex. Because the vertices of the feasible region of (CH) are all integral, this implies that there is at least one integral efficient solution to (CH).

Now suppose that \(\text {conv}({\mathcal {X}}_{\textrm{MOIP}})\) does not have vertices. Then, by [34, Theorem 4.3.8(iii)], it has a nonempty face F such that every \(x \in F\) is an efficient solution. Because F is a face of \(\text {conv}({\mathcal {X}}_{\textrm{MOIP}})\), there is a matrix B and a vector d such that \(F = \{x \in \text {conv}({\mathcal {X}}_{\textrm{MOIP}})\ \vert \ Bx = d\}\) and such that \(Bx \leqq d\) for all \(x \in {\mathcal {X}}_{\textrm{MOIP}}\). If \(x^* \in F\), then \(x^* = \sum _{i = 1}^{p} t^ix^i\) for some positive integer p, \(t^1, t^2, \ldots ,t^p \in [0,1]\) and \(x^1,x^2, \ldots , x^p\in {\mathcal {X}}_{\textrm{MOIP}}\) with \(\sum _{i = 1}^{p}t^i = 1\) because \(F \subseteq \text {conv}({\mathcal {X}}_{\textrm{MOIP}})\). Then,

$$\begin{aligned} Bx^* = B\Bigg (\sum _{i = 1}^{p}t^ix^i\Bigg )=\sum _{i = 1}^{p}t^iBx^i = d. \end{aligned}$$

Because \(Bx^i\leqq d\) for each i and the \(t^i\) are nonnegative and sum to one, this implies that \(Bx^i = d\) for every i with \(t^i > 0\), and at least one such i exists. So, \(x^i\in F\) for that i, and therefore (CH) has an integral efficient solution. \(\square \)

Proposition 9

([8, 42]) Let \(x^*\) be an efficient solution of (MOIP). Then, \(x^*\) is a supported solution if and only if it is an efficient solution of (CH).

Proof

Let \(x^*\) be an efficient solution of (CH). By Lemma 1, there is a scalarizing vector \(\mu \in {\mathbb {R}}^k_{>}\) such that \(x^*\) is optimal to

$$\begin{aligned} \max \ \mu ^\top Cx \quad \text {s.t. } x \in \text {conv}({\mathcal {X}}_{\textrm{MOIP}}). \qquad (\text {CH}_\mu ) \end{aligned}$$

On the other hand, because \(x^*\) is feasible to (MOIP), it is feasible to the IP

$$\begin{aligned} \max \ \mu ^\top Cx \quad \text {s.t. } x \in {\mathcal {X}}_{\textrm{MOIP}}. \qquad (\text {MOIP}_\mu ) \end{aligned}$$

Note that (CH\(_\mu \)) is the convex-hull relaxation of the single-objective IP (MOIP\(_\mu \)). Because \(x^*\) is optimal to this relaxation and feasible to (MOIP\(_\mu \)), it must be optimal to (MOIP\(_\mu \)). Therefore, \(x^*\) is a supported efficient solution to (MOIP).

Conversely, suppose \(x^*\) is a supported efficient solution of (MOIP). Then, there exists a scalarizing vector \(\mu \in {\mathbb {R}}^k_>\) such that \(x^*\) is an optimal solution to (MOIP\(_\mu \)). Once again, because (CH\(_\mu \)) is the convex hull relaxation of (MOIP\(_\mu \)), \(x^*\) is optimal to (CH\(_\mu \)) as well. It follows from Lemma 1 that \(x^*\) is efficient for (CH). \(\square \)
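Proposition 9 characterizes supported efficient solutions via weighted-sum scalarizations. For a finite outcome set this characterization can be checked with a small feasibility LP: a nondominated point \(y^*\) is supported exactly when some strictly positive weight vector \(\mu \) makes it optimal for \(\max \ \mu ^\top y\) over all feasible objective vectors y. The sketch below is our own illustration (not part of the paper's computational study); it uses SciPy, the function name is_supported is ours, and strict positivity is approximated by a small lower bound eps.

    import numpy as np
    from scipy.optimize import linprog

    def is_supported(y_star, Y, eps=1e-6):
        """Return True if some weight vector mu >= eps (componentwise) makes y_star
        optimal for the weighted-sum scalarization max mu^T y over the finite
        outcome set Y (maximization)."""
        Y = np.asarray(Y, dtype=float)
        y_star = np.asarray(y_star, dtype=float)
        k = y_star.size
        # Feasibility LP in mu:  mu^T (y - y_star) <= 0 for all y in Y,
        #                        sum(mu) = 1,  mu_i >= eps.
        res = linprog(c=np.zeros(k),
                      A_ub=Y - y_star, b_ub=np.zeros(len(Y)),
                      A_eq=np.ones((1, k)), b_eq=np.array([1.0]),
                      bounds=[(eps, None)] * k, method="highs")
        return bool(res.success)

    # Toy outcome set: (3,0) is supported, while (1,1) is nondominated but unsupported.
    Y = [(0, 3), (3, 0), (1, 1)]
    print(is_supported((3, 0), Y))  # True
    print(is_supported((1, 1), Y))  # False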

Appendix C: An example to illustrate Theorem 7

This example illustrates Theorem 7. We consider an MOIP whose feasible region does not satisfy condition (15) and show that the Lagrangian dual for this problem is not strong at a supported solution.

Example 6

Consider the problem

$$\begin{aligned} \max \ \begin{bmatrix} 1 & 0 \\ 0 & 1\end{bmatrix}\begin{bmatrix} x_1\\ x_2 \end{bmatrix} \quad \text {s.t. } 2x_1 + 4x_2 \leqq 5,\ 4x_1 + 2x_2 \leqq 5,\ x_1, x_2 \in \{0,1\}. \end{aligned}$$

For this problem, \({\mathcal {X}} = {\mathcal {Y}} = \{(0,0)^\top ,(1,0)^\top ,(0,1)^\top \}\) and the supported nondominated points are \((1,0)^\top \) and \((0,1)^\top \). Set \(Q = \{0,1\}^2\), \(A^1 = \begin{bmatrix} 2 & 4\\ 4 & 2 \end{bmatrix}\), and \(b^1 = (5, 5)^\top \). Then,

$$\begin{aligned} \text {conv}(Q \cap \{x \in {\mathbb {R}}^n\ \vert \ A^1x \leqq b^1\}) \subset \text {conv}(Q) \cap \{x \in {\mathbb {R}}^n\ \vert \ A^1x \leqq b^1\}, \end{aligned}$$

where the containment is strict. Enumerating \(x \in Q\), we have the Lagrangian dual

$$\begin{aligned} \mathrm{Min}({\mathcal {Y}}_{\textrm{LD}}) = \mathrm{Min}\Bigg ( \bigcup \limits _{\varLambda \geqq 0} \mathrm{Max}\Bigg \{&\begin{pmatrix} 5\lambda _{11} + 5\lambda _{12}\\ 5\lambda _{21} + 5\lambda _{22} \end{pmatrix}, \begin{pmatrix} 3\lambda _{11} + \lambda _{12}\\ 1 + \lambda _{21} + 3\lambda _{22} \end{pmatrix}, \\&\begin{pmatrix} 1 +\lambda _{11} + 3\lambda _{12}\\ 3\lambda _{21} + \lambda _{22} \end{pmatrix}, \begin{pmatrix} 1-\lambda _{11} -\lambda _{12}\\ 1-\lambda _{21} -\lambda _{22} \end{pmatrix} \Bigg \} \Bigg ). \end{aligned}$$

Then, there is no \(\varLambda \in \mathbb {R}^{2 \times 2}_{\geqq }\) for which \((1,0)^\top \in \mathrm{Max}({\mathcal {Y}}_{\textrm{LR}(\varLambda )})\), and consequently \((1,0)^\top \notin \mathrm{Min}({\mathcal {Y}}_{\textrm{LD}})\). To see this, first suppose that \(\varLambda \) is such that \(\begin{pmatrix} 5\lambda _{11} + 5\lambda _{12}\\ 5\lambda _{21} + 5\lambda _{22} \end{pmatrix} = (1,0)^\top \). Then, we must have \(\lambda _{11} + \lambda _{12} = \frac{1}{5}\) and \(\lambda _{21} = \lambda _{22} = 0\). But then, because one of \(\lambda _{11},\lambda _{12}\) must be positive, there are positive parameters \(\delta _1 = 3\lambda _{11} + \lambda _{12}\) and \(\delta _2 = \lambda _{11} + 3\lambda _{12}\) with \(\delta _1,\delta _2 \ge \frac{1}{5}\) such that the relaxation (LR(\(\varLambda \))) is

$$\begin{aligned} \mathrm{Max}\left\{ \begin{pmatrix}1\\ 0 \end{pmatrix}, \begin{pmatrix}\delta _1\\ 1 \end{pmatrix}, \begin{pmatrix}1+\delta _2\\ 0 \end{pmatrix}, \begin{pmatrix}\frac{4}{5}\\ 1 \end{pmatrix} \right\} . \end{aligned}$$

Then, \(1+\delta _2 \ge \frac{6}{5} > 1\), which implies that \((1,0)\not \in \mathrm{Max}({\mathcal {Y}}_{\textrm{LR}(\varLambda )})\).

Because \(\lambda _{21},\lambda _{22} \ge 0\), there is no feasible \(\varLambda \) such that \(1+ \lambda _{21} +3\lambda _{22} = 0\). So, \(\begin{pmatrix} 3\lambda _{11} + \lambda _{12}\\ 1 + \lambda _{21} + 3\lambda _{22} \end{pmatrix} \not = \begin{pmatrix}1\\ 0 \end{pmatrix}.\)

Next, if \(\begin{pmatrix} 1 +\lambda _{11} + 3\lambda _{12}\\ 3\lambda _{21} + \lambda _{22} \end{pmatrix} = \begin{pmatrix}1\\ 0 \end{pmatrix}\), then \(\varLambda = 0\). However, \((1,0) \not \in \mathrm{Max}({\mathcal {Y}}_{\textrm{LR}(0)})\) because

$$\begin{aligned} \mathrm{Max}({\mathcal {Y}}_{\textrm{LR}(0)})=\mathrm{Max}\{(0,0),(0,1),(1,0),(1,1) \} = \{(1,1)\}. \end{aligned}$$

Finally, if \(\varLambda \) was such that \(\begin{pmatrix} 1-\lambda _{11} -\lambda _{12}\\ 1-\lambda _{21} -\lambda _{22} \end{pmatrix} = \begin{pmatrix} 1\\ 0\end{pmatrix}\), then \(\lambda _{11} = \lambda _{12} = 0\) and \(\lambda _{21} + \lambda _{22} = 1\). Then, there are positive parameters \(\delta _1 = \lambda _{21} + 3\lambda _{22}\) and \(\delta _2 = 3\lambda _{21} + \lambda _{22}\) such that \(\delta _1, \delta _2 \ge 1\) and

$$\begin{aligned} \mathrm{Max}({\mathcal {Y}}_{\textrm{LR}(\varLambda )}) = \mathrm{Max}\left\{ \begin{pmatrix} 0\\ 5 \end{pmatrix}, \begin{pmatrix} 0\\ 1+\delta _1 \end{pmatrix}, \begin{pmatrix}1 \\ \delta _2 \end{pmatrix}, \begin{pmatrix} 1\\ 0 \end{pmatrix} \right\} , \end{aligned}$$

so that \((1,0) \not \in \mathrm{Max}({\mathcal {Y}}_{\textrm{LR}(\varLambda )})\) because \(\delta _2 \ge 1\). Therefore, no Lagrangian relaxation is tight at \((1,0)^\top \). Moreover, the relaxations are bounded away from \((1,0)^\top \), so \((1,0)^\top \) cannot be a limit point of \({\mathcal {Y}}_{\textrm{LD}}\).

Thus, condition (15) is not satisfied, and the Lagrangian dual for this problem is not strong at a supported efficient solution. \(\square \)
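As a quick numerical sanity check of Example 6 (our own illustration, not part of the paper), the following sketch hard-codes the four Lagrangian-relaxation objective vectors displayed above, computes their nondominated set for a given multiplier matrix \(\varLambda \geqq 0\), and scans a coarse, purely illustrative grid of multipliers to confirm that \((1,0)^\top \) never appears as a nondominated point of any of these relaxations. The function names are ours.

    import itertools
    import numpy as np

    def candidate_vectors(l11, l12, l21, l22):
        """The four objective vectors of LR(Lambda) in Example 6, copied from the
        display above, for Lambda = [[l11, l12], [l21, l22]] >= 0."""
        return [
            (5*l11 + 5*l12,    5*l21 + 5*l22),
            (3*l11 + l12,      1 + l21 + 3*l22),
            (1 + l11 + 3*l12,  3*l21 + l22),
            (1 - l11 - l12,    1 - l21 - l22),
        ]

    def nondominated_max(points, tol=1e-9):
        """Points not dominated from above (maximization)."""
        def dominated(p, q):
            return (all(pi <= qi + tol for pi, qi in zip(p, q))
                    and any(pi < qi - tol for pi, qi in zip(p, q)))
        return [p for p in points if not any(dominated(p, q) for q in points)]

    # Coarse grid search over multipliers: (1, 0) is never a nondominated point.
    grid = np.linspace(0.0, 1.0, 11)
    attained = any(
        any(np.allclose(p, (1.0, 0.0))
            for p in nondominated_max(candidate_vectors(l11, l12, l21, l22)))
        for l11, l12, l21, l22 in itertools.product(grid, repeat=4)
    )
    print("(1, 0) attained by some relaxation on the grid:", attained)  # False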

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Dunbar, A., Sinha, S. & Schaefer, A.J. Relaxations and duality for multiobjective integer programming. Math. Program. (2023). https://doi.org/10.1007/s10107-023-02022-7
