Abstract
This paper focuses on distributed consensus optimization problems with coupled constraints over time-varying multi-agent networks, where the global objective is the finite sum of all agents’ private local objective functions, and the agents’ decision variables are subject to coupled equality and inequality constraints as well as a compact convex set constraint. Each agent exchanges information with its neighbors and processes local data. The agents cooperate to agree on a consensual decision vector that is an optimal solution to the considered optimization problem. We integrate ideas behind dynamic average consensus and primal-dual methods to develop a distributed algorithm and establish its sublinear convergence rate. In numerical simulations, we illustrate the effectiveness of the proposed algorithm by comparing it with related methods on the Neyman–Pearson classification problem.
Data Availability Statements
The authors declare that all data supporting the findings of this study are available within the article and its supplementary information files.
References
Falsone, A., Notarnicola, I., Notarstefano, G., Prandini, M.: Tracking-ADMM for distributed constraint-coupled optimization. Automatica 117, 108962 (2020)
Falsone, A., Margellos, K., Garatti, S., Prandini, M.: Dual decomposition for multi-agent distributed optimization with coupling constraints. Automatica 84, 149–158 (2017)
Falsone, A., Prandini, M.: Distributed decision-coupled constrained optimization via proximal-tracking. Automatica 135, 109938 (2022)
Falsone, A., Prandini, M.: Augmented Lagrangian tracking for distributed optimization with equality and inequality coupling constraints. Automatica 157, 111269 (2023)
Alghunaim, S.A., Lyu, Q., Yan, M., Sayed, A.H.: Dual consensus proximal algorithm for multi-agent sharing problems. IEEE Trans. Signal Process. 69, 5568–5579 (2021)
Arauz, T., Chanfreut, P., Maestre, J.: Cyber-security in networked and distributed model predictive control. Annu. Rev. Control 53, 338–355 (2022)
Arrow, K.J., Hurwicz, L., Uzawa, H.: Studies in Linear and Nonlinear Programming. Stanford University Press, Palo Alto (1958)
Carli, R., Dotoli, M.: Distributed alternating direction method of multipliers for linearly constrained optimization over a network. IEEE Control Syst. Lett. 4(1), 247–252 (2020)
Chang, T.H.: A proximal dual consensus ADMM method for multi-agent constrained optimization. IEEE Trans. Signal Process. 64(14), 3719–3734 (2016)
Chang, T.H., Nedić, A., Scaglione, A.: Distributed constrained optimization by consensus-based primal-dual perturbation method. IEEE Trans. Autom. Control 59(6), 1524–1538 (2014)
CVX Research, Inc.: CVX: MATLAB software for disciplined convex programming (2012). http://cvxr.com/cvx/
Hamedani, E.Y., Aybat, N.S.: Multi-agent constrained optimization of a strongly convex function over time-varying directed networks. In: 2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 518–525 (2017)
Hao, X., Liang, Y., Li, T.: Distributed estimation for multi-subsystem with coupled constraints. IEEE Trans. Signal Process. 70, 1548–1559 (2022)
Necoara, I., Nedelcu, V.: On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems. Automatica 55, 209–216 (2015)
Lan, G., Zhou, Z.: Algorithms for stochastic optimization with function or expectation constraints. Comput. Optim. Appl. 76, 461–498 (2020)
Lei, J., Chen, H.F., Fang, H.T.: Primal-dual algorithm for distributed constrained optimization. Syst. Control Lett. 96, 110–117 (2016)
Li, X., Feng, G., Xie, L.: Distributed proximal algorithms for multiagent optimization with coupled inequality constraints. IEEE Trans. Autom. Control 66(3), 1223–1230 (2021)
Li, Z., Shi, W., Yan, M.: A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates. IEEE Trans. Signal Process. 67(17), 4494–4506 (2019)
Liang, S., Wang, L.Y., Yin, G.: Distributed smooth convex optimization with coupled constraints. IEEE Trans. Autom. Control 65(1), 347–353 (2020)
Liao, S.: A fast distributed algorithm for coupled utility maximization problem with application for power control in wireless sensor networks. J. Commun. Netw. 23(4), 271–280 (2021)
Liu, C., Li, H., Shi, Y.: A unitary distributed subgradient method for multi-agent optimization with different coupling sources. Automatica 114, 108834 (2020)
Liu, H., Yu, W., Chen, G.: Discrete-time algorithms for distributed constrained convex optimization with linear convergence rates. IEEE Trans. Cybern. 52(6), 4874–4885 (2022)
Liu, Q., Yang, S., Hong, Y.: Constrained consensus algorithms with fixed step size for distributed convex optimization over multiagent networks. IEEE Trans. Autom. Control 62(8), 4259–4265 (2017)
Liu, T., Han, D., Lin, Y., Liu, K.: Distributed multi-UAV trajectory optimization over directed networks. J. Frankl. Inst. 358(10), 5470–5487 (2021)
Mateos-Núñez, D., Cortés, J.: Distributed saddle-point subgradient algorithms with Laplacian averaging. IEEE Trans. Autom. Control 62(6), 2720–2735 (2017)
Nedić, A., Olshevsky, A., Shi, W.: Achieving geometric convergence for distributed optimization over time-varying graphs. SIAM J. Optim. 27(4), 2597–2633 (2017)
Nedić, A., Olshevsky, A., Shi, W.: Improved convergence rates for distributed resource allocation. In: 2018 IEEE Conference on Decision and Control (CDC), pp. 172–177 (2018)
Nedić, A., Ozdaglar, A.: Approximate primal solutions and rate analysis for dual subgradient methods. SIAM J. Optim. 19(4), 1757–1780 (2009)
Nedić, A., Ozdaglar, A.: Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 54(1), 48–61 (2009)
Nedić, A., Ozdaglar, A., Parrilo, P.A.: Constrained consensus and optimization in multi-agent networks. IEEE Trans. Autom. Control 55(4), 922–938 (2010)
Nesterov, Y.: Gradient methods for minimizing composite functions. Math. Program. 140, 125–161 (2013)
Notarnicola, I., Notarstefano, G.: Constraint-coupled distributed optimization: a relaxation and duality approach. IEEE Trans. Control Netw. Syst. 7(1), 483–492 (2020)
Nowak, R.: Distributed EM algorithms for density estimation and clustering in sensor networks. IEEE Trans. Signal Process. 51(8), 2245–2253 (2003)
Polyak, B.: Introduction to Optimization. Optimization Software, New York (1987)
Pu, S., Shi, W., Xu, J., Nedić, A.: Push-pull gradient methods for distributed optimization in networks. IEEE Trans. Autom. Control 66(1), 1–16 (2021)
Qu, G., Li, N.: Harnessing smoothness to accelerate distributed optimization. IEEE Trans. Control Netw. Syst. 5(3), 1245–1260 (2018)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
Rockafellar, R.T., Uryasev, S.: Optimization of conditional value-at-risk. J. Risk 2, 21–41 (2000)
Saadatniaki, F., Xin, R., Khan, U.A.: Decentralized optimization over time-varying directed graphs with row and column-stochastic matrices. IEEE Trans. Autom. Control 65(11), 4769–4780 (2020)
Shi, W., Ling, Q., Wu, G., Yin, W.: EXTRA: an exact first-order algorithm for decentralized consensus optimization. SIAM J. Optim. 25(2), 944–966 (2015)
Shi, W., Ling, Q., Wu, G., Yin, W.: A proximal gradient algorithm for decentralized composite optimization. IEEE Trans. Signal Process. 63(22), 6013–6023 (2015)
Simonetto, A., Jamali-Rad, H.: Primal recovery from consensus-based dual decomposition for distributed convex optimization. J. Optim. Theory Appl. 168(1), 172–197 (2016)
Su, Y., Wang, Q., Sun, C.: Distributed primal-dual method for convex optimization with coupled constraints. IEEE Trans. Signal Process. 70, 523–535 (2022)
Tong, X., Feng, Y., Zhao, A.: A survey on Neyman–Pearson classification and suggestions for future research. WIREs Comput. Stat. 8(2), 64–81 (2016)
Wiltz, A., Chen, F., Dimarogonas, D.V.: A consistency constraint-based approach to coupled state constraints in distributed model predictive control. In: 2022 IEEE 61st Conference on Decision and Control (CDC), pp. 3959–3964 (2022)
Causality Workbench Team: A marketing dataset (2008). http://www.causality.inf.ethz.ch/data/CINA.html
Wu, X., Wang, H., Lu, J.: Distributed optimization with coupling constraints. IEEE Trans. Autom. Control 68(3), 1847–1854 (2023)
Xu, J., Tian, Y., Sun, Y., Scutari, G.: Distributed algorithms for composite optimization: unified framework and convergence analysis. IEEE Trans. Signal Process. 69, 3555–3570 (2021)
Xu, J., Zhu, S., Soh, Y.C., Xie, L.: Augmented distributed gradient methods for multi-agent optimization under uncoordinated constant stepsizes. In: 2015 54th IEEE Conference on Decision and Control (CDC), pp. 2055–2060 (2015)
Yuan, D., Proutiere, A., Shi, G.: Distributed online linear regressions. IEEE Trans. Inf. Theory 67(1), 616–639 (2021)
Zhou, X., Ma, Z., Zou, S., Zhang, J.: Consensus-based distributed economic dispatch for multi micro energy grid systems under coupled carbon emissions. Appl. Energy 324, 119641 (2022)
Zhu, K., Tang, Y.: Primal-dual \(\varepsilon \)-subgradient method for distributed optimization. J. Syst. Sci. Complex. 36, 577–590 (2023)
Zhu, M., Martinez, S.: On distributed convex optimization under inequality and equality constraints. IEEE Trans. Autom. Control 57(1), 151–164 (2012)
Additional information
Communicated by Jalal M. Fadili.
Liwei Zhang: This author was supported by the National Natural Science Foundation of China (Nos. 11971089 and 11731013) and partially supported by the Dalian High-level Talent Innovation Project (No. 2020RD09).
Appendices
Proof of Lemma 4.1
For simplicity, denote \(\delta _k=V_{k+1}^{-1}\left( \nabla \textrm{L}(\omega ^{k+1})-\nabla \textrm{L}(\omega ^k)\right) \); then (4.5) can be equivalently written as \(s^{k+1}=\mathcal {R}_ks^k+\delta _k\). Unrolling this recursion gives \(s^{k+1}=\varPhi _\mathcal {R}(k,1)s^1+\sum _{l=1}^{k-1}\varPhi _\mathcal {R}(k,l+1)\delta _l+\delta _k\). Since \(\{\frac{1}{2m}v_k\}\) is an absolute probability sequence for the matrix sequence \(\{\mathcal {R}_k\}\), it follows that
Applying the subadditivity of \(\Vert \cdot \Vert \) yields
For the matrix sequence \(\{\varPhi _\mathcal {R}(k,s)\}\), results similar to those in Lemma C.1 hold, i.e.,
where \(\rho \in (0,1)\) and \(C_1\) is some positive constant. Due to the compactness of the subset \(\mathcal {X}\), it is clear that \(\delta _k\) is bounded. Denote the upper bound of \(\Vert \delta _k\Vert \) by \(\widetilde{C}\). Therefore,
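One plausible completion of the omitted estimate, under the assumption that the bound above takes the form \(\Vert \varPhi _\mathcal {R}(k,s)\Vert \le C_1\rho ^{k-s}\), is
$$\begin{aligned} \Vert s^{k+1}\Vert \le \Vert \varPhi _\mathcal {R}(k,1)\Vert \Vert s^1\Vert +\sum _{l=1}^{k-1}\Vert \varPhi _\mathcal {R}(k,l+1)\Vert \Vert \delta _l\Vert +\Vert \delta _k\Vert \le C_1\rho ^{k-1}\Vert s^1\Vert +\widetilde{C}\left( \frac{C_1}{1-\rho }+1\right) , \end{aligned}$$
where the last step uses \(\sum _{l=1}^{k-1}\rho ^{k-l-1}\le \frac{1}{1-\rho }\).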
Proof of Lemma 4.2
First, we need to verify the following inequality,
Then, combining the conditions on the step-size sequence \(\{\alpha _k\}\) in Assumption 3.1 with the results of Lemma C.4, it is straightforward to establish the claimed results.
(a): From (4.4), there exists \(e^k\) such that \(\omega ^{k+1}=\mathcal {A}_k\omega ^k-\alpha _kV_ks^k+e^k\). Similar to the inequality (A.1), it holds that
Since \(\omega ^{k+1}\) is the projection of \(\mathcal {A}_k\omega ^k-\alpha _kV_ks^k\) onto \(\varOmega \), for any \(\omega \in \varOmega \), one has
which implies that
From Lemma 4.1, it follows that
In view of the results in Lemmas C.3 and C.4, one can get a modified version of the inequality (B.1) as follows,
Under Assumption 3.1 and Lemma C.4 (a), it is clear that
(b): Multiplying both sides of the inequality (B.2) by \(\alpha _{k+1}\) yields
where \(C_2, C_3, C_4, C_5\) are some positive constants. Once again, under Assumption 3.1 and Lemma C.4 (b), it is concluded that
In Lemma 4.2, the claimed results about the sequence \(\{s^k\}\) parallel those for \(\{\omega ^k\}\), and their proof is omitted here.
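The argument in (a) relies on the nonexpansiveness of the Euclidean projection onto the convex set \(\varOmega \). A minimal numeric sketch of this property, using a hypothetical box set in place of \(\varOmega \):

```python
import random

def proj_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n, a compact convex set."""
    return [min(max(xi, lo), hi) for xi in x]

def dist(x, y):
    """Euclidean distance between two vectors."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

# Check ||P(x) - P(y)|| <= ||x - y|| on random vector pairs.
rng = random.Random(1)
nonexpansive = all(
    dist(proj_box(x), proj_box(y)) <= dist(x, y) + 1e-12
    for x, y in (
        ([rng.uniform(-3, 3) for _ in range(5)],
         [rng.uniform(-3, 3) for _ in range(5)])
        for _ in range(1000)
    )
)
```

For a box, the projection acts coordinate-wise, so nonexpansiveness reduces to the scalar fact \(|\mathrm{clip}(a)-\mathrm{clip}(b)|\le |a-b|\).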
Supporting Lemmas
Lemma C.1
([29], Lemma 4) Under Assumptions 2.1 and 2.2, it holds that
(a) \(\lim _{k\rightarrow \infty }\varPhi _\mathcal {A}(k,s)=1_{2m}\mu _s^\top \), where \(\mu _s\in \mathbb {R}^{2m}\) is a stochastic vector for each s.
(b) For any i, the entries \(\varPhi _\mathcal {A}(k,s)_{ij}, j=1,\dots ,2m\), converge to the same limit \((\mu _s)_i\) as \(k\rightarrow \infty \) at a geometric rate, i.e., for each i and \(s\ge 0\),
$$\begin{aligned} \left| \varPhi _\mathcal {A}(k,s)_{ij}-\left( \mu _s\right) _i\right| \le C\beta ^{k-s},\ \forall \, k\ge s, \end{aligned}$$
where \(C=2\frac{1+\eta ^{-B_0}}{1-\eta ^{B_0}}\), \(\beta =(1-\eta ^{B_0})^{1/B_0}\in (0,1)\), \(\eta \) is the lower bound in Assumption 2.2, \(B_0=(m-1)B\), and the integer B is defined in Assumption 2.1.
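The geometric consensus of the transition matrices can be observed numerically. The sketch below is not the paper's time-varying-graph setting (Assumptions 2.1–2.2); it uses fully positive random row-stochastic matrices, a hypothetical special case in which the product \(\varPhi (k,0)=A_k\cdots A_0\) provably approaches a rank-one matrix \(1\mu ^\top \) at a geometric rate:

```python
import random

def mat_mul(A, B):
    """Product of two square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def random_row_stochastic(n, rng):
    """Row-stochastic matrix with entries bounded away from zero."""
    M = [[rng.random() + 0.5 for _ in range(n)] for _ in range(n)]
    return [[x / sum(row) for x in row] for row in M]

rng = random.Random(0)
n = 4
Phi = random_row_stochastic(n, rng)                      # Phi(0, 0) = A_0
spreads = []
for k in range(1, 60):
    Phi = mat_mul(random_row_stochastic(n, rng), Phi)    # Phi(k,0) = A_k Phi(k-1,0)
    # Column-wise entry spread measures the distance to a rank-one matrix:
    # it is zero iff all rows of Phi coincide.
    spread = max(max(col) - min(col)
                 for col in ([Phi[i][j] for i in range(n)] for j in range(n)))
    spreads.append(spread)
final_spread = spreads[-1]
row_sum = sum(Phi[0])   # products of row-stochastic matrices stay row-stochastic
```

The per-step contraction follows from the Dobrushin coefficient bound \(\tau (A)\le 1-n\min _{i,j}a_{ij}\); with entries in \([0.5,1.5)\) before normalization, each multiplication shrinks the spread by a factor of at most \(2/3\).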
Lemma C.2
([39], Corollary 1) Under the assumptions of Lemma C.1, the sequence \(\{\phi _k\}\) is an absolute probability sequence for the matrix sequence \(\{\mathcal {A}_k\}\), where
Lemma C.3
([34], Lemma 11, Chapter 2.2) Let \(\{b_k\}, \{c_k\}, \{d_k\}\) be non-negative sequences. Suppose that \(\sum _{k=0}^\infty c_k<\infty \) and
$$\begin{aligned} b_{k+1}\le b_k-d_k+c_k,\quad \forall \, k\ge 0, \end{aligned}$$
then the sequence \(\{b_k\}\) converges and \(\sum _{k=0}^\infty d_k<\infty \).
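A numerical sanity check of this lemma with hypothetical sequences: the perturbation \(c_k=0.5^k\) is summable, and \(d_k=b_k/(k+2)^2\) keeps the recursion \(b_{k+1}=b_k-d_k+c_k\) non-negative, so \(\{b_k\}\) should settle to a limit and the partial sums of \(d_k\) should stay bounded.

```python
# Simulate the recursion b_{k+1} = b_k - d_k + c_k (equality is the tight
# case of the lemma's inequality) with hypothetical choices of c_k and d_k.
b = 1.0
d_sum = 0.0
bs = []
for k in range(2000):
    c = 0.5 ** k            # summable perturbation
    d = b / (k + 2) ** 2    # "progress" term, d_k <= b_k / 4 keeps b_k >= 0
    b = b - d + c
    d_sum += d
    bs.append(b)
```

Since \(\sum _k d_k=b_0+\sum _k c_k-\lim _k b_k\), summability of \(\{d_k\}\) here is immediate once \(\{b_k\}\) converges.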
Lemma C.4
([30], Lemma 7) Let \(\beta \in (0,1)\), and \(\{\gamma _k\}\) be a positive scalar sequence.
(a) If \(\lim _{k\rightarrow \infty }\gamma _k=0\), then \(\lim _{k\rightarrow \infty }\sum _{l=0}^k\beta ^{k-l}\gamma _l=0\).
(b) Furthermore, if \(\sum _{k=0}^\infty \gamma _k<\infty \), then \(\sum _{k=0}^\infty \sum _{l=0}^k\beta ^{k-l}\gamma _l<\infty \).
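Both parts of Lemma C.4 are easy to check numerically; the sketch below uses hypothetical choices \(\gamma _k=1/(k+1)\) for part (a) and \(\gamma _k=1/(k+1)^2\) for part (b), with \(\beta =0.5\).

```python
beta = 0.5
K = 200

# (a) gamma_k -> 0, so the geometric convolution sum_{l<=k} beta^(k-l) gamma_l
# should also vanish as k grows.
gamma = [1.0 / (k + 1) for k in range(K)]
conv = [sum(beta ** (k - l) * gamma[l] for l in range(k + 1)) for k in range(K)]

# (b) summable gamma_k; swapping the order of summation bounds the double sum
# by sum(gamma) / (1 - beta).
gamma2 = [1.0 / (k + 1) ** 2 for k in range(K)]
double_sum = sum(sum(beta ** (k - l) * gamma2[l] for l in range(k + 1))
                 for k in range(K))
```

The bound in (b) comes from \(\sum _{k}\sum _{l\le k}\beta ^{k-l}\gamma _l=\sum _l\gamma _l\sum _{k\ge l}\beta ^{k-l}\le \frac{1}{1-\beta }\sum _l\gamma _l\).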
About this article
Cite this article
Gong, K., Zhang, L. Primal-Dual Algorithm for Distributed Optimization with Coupled Constraints. J Optim Theory Appl 201, 252–279 (2024). https://doi.org/10.1007/s10957-024-02393-7