Primal-Dual Algorithm for Distributed Optimization with Coupled Constraints

Published in the Journal of Optimization Theory and Applications.

Abstract

This paper studies distributed consensus optimization problems with coupled constraints over time-varying multi-agent networks, where the global objective is the finite sum of the agents' private local objective functions, and the agents' decision variables are subject to coupled equality and inequality constraints as well as a compact convex set constraint. Each agent exchanges information with its neighbors and processes local data. The agents cooperate to agree on a common decision vector that is an optimal solution of the considered optimization problem. We integrate ideas from dynamic average consensus and primal-dual methods to develop a distributed algorithm and establish its sublinear convergence rate. In numerical simulations, we illustrate the effectiveness of the proposed algorithm by comparing it with related methods on the Neyman–Pearson classification problem.


Data Availability Statement

The authors declare that all data supporting the findings of this study are available within the article and its supplementary information files.

References

  1. Falsone, A., Notarnicola, I., Notarstefano, G., Prandini, M.: Tracking-ADMM for distributed constraint-coupled optimization. Automatica 117, 108962 (2020)

  2. Falsone, A., Margellos, K., Garatti, S., Prandini, M.: Dual decomposition for multi-agent distributed optimization with coupling constraints. Automatica 84, 149–158 (2017)

  3. Falsone, A., Prandini, M.: Distributed decision-coupled constrained optimization via proximal-tracking. Automatica 135, 109938 (2022)

  4. Falsone, A., Prandini, M.: Augmented Lagrangian tracking for distributed optimization with equality and inequality coupling constraints. Automatica 157, 111269 (2023)

  5. Alghunaim, S.A., Lyu, Q., Yan, M., Sayed, A.H.: Dual consensus proximal algorithm for multi-agent sharing problems. IEEE Trans. Signal Process. 69, 5568–5579 (2021)

  6. Arauz, T., Chanfreut, P., Maestre, J.: Cyber-security in networked and distributed model predictive control. Annu. Rev. Control 53, 338–355 (2022)

  7. Arrow, K.J., Hurwicz, L., Uzawa, H.: Studies in Linear and Nonlinear Programming. Stanford University Press, Palo Alto (1958)

  8. Carli, R., Dotoli, M.: Distributed alternating direction method of multipliers for linearly constrained optimization over a network. IEEE Control Syst. Lett. 4(1), 247–252 (2020)

  9. Chang, T.H.: A proximal dual consensus ADMM method for multi-agent constrained optimization. IEEE Trans. Signal Process. 64(14), 3719–3734 (2016)

  10. Chang, T.H., Nedić, A., Scaglione, A.: Distributed constrained optimization by consensus-based primal-dual perturbation method. IEEE Trans. Autom. Control 59(6), 1524–1538 (2014)

  11. CVX Research, Inc.: CVX: Matlab software for disciplined convex programming (2012). http://cvxr.com/cvx/

  12. Hamedani, E.Y., Aybat, N.S.: Multi-agent constrained optimization of a strongly convex function over time-varying directed networks. In: 2017 55th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 518–525 (2017)

  13. Hao, X., Liang, Y., Li, T.: Distributed estimation for multi-subsystem with coupled constraints. IEEE Trans. Signal Process. 70, 1548–1559 (2022)

  14. Necoara, I., Nedelcu, V.: On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems. Automatica 55, 209–216 (2015)

  15. Lan, G., Zhou, Z.: Algorithms for stochastic optimization with function or expectation constraints. Comput. Optim. Appl. 76, 461–498 (2020)

  16. Lei, J., Chen, H.F., Fang, H.T.: Primal-dual algorithm for distributed constrained optimization. Syst. Control Lett. 96, 110–117 (2016)

  17. Li, X., Feng, G., Xie, L.: Distributed proximal algorithms for multiagent optimization with coupled inequality constraints. IEEE Trans. Autom. Control 66(3), 1223–1230 (2021)

  18. Li, Z., Shi, W., Yan, M.: A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates. IEEE Trans. Signal Process. 67(17), 4494–4506 (2019)

  19. Liang, S., Wang, L.Y., Yin, G.: Distributed smooth convex optimization with coupled constraints. IEEE Trans. Autom. Control 65(1), 347–353 (2020)

  20. Liao, S.: A fast distributed algorithm for coupled utility maximization problem with application for power control in wireless sensor networks. J. Commun. Netw. 23(4), 271–280 (2021)

  21. Liu, C., Li, H., Shi, Y.: A unitary distributed subgradient method for multi-agent optimization with different coupling sources. Automatica 114, 108834 (2020)

  22. Liu, H., Yu, W., Chen, G.: Discrete-time algorithms for distributed constrained convex optimization with linear convergence rates. IEEE Trans. Cybern. 52(6), 4874–4885 (2022)

  23. Liu, Q., Yang, S., Hong, Y.: Constrained consensus algorithms with fixed step size for distributed convex optimization over multiagent networks. IEEE Trans. Autom. Control 62(8), 4259–4265 (2017)

  24. Liu, T., Han, D., Lin, Y., Liu, K.: Distributed multi-UAV trajectory optimization over directed networks. J. Frankl. Inst. 358(10), 5470–5487 (2021)

  25. Mateos-Núñez, D., Cortés, J.: Distributed saddle-point subgradient algorithms with Laplacian averaging. IEEE Trans. Autom. Control 62(6), 2720–2735 (2017)

  26. Nedić, A., Olshevsky, A., Shi, W.: Achieving geometric convergence for distributed optimization over time-varying graphs. SIAM J. Optim. 27(4), 2597–2633 (2017)

  27. Nedić, A., Olshevsky, A., Shi, W.: Improved convergence rates for distributed resource allocation. In: 2018 IEEE Conference on Decision and Control (CDC), pp. 172–177 (2018)

  28. Nedić, A., Ozdaglar, A.: Approximate primal solutions and rate analysis for dual subgradient methods. SIAM J. Optim. 19(4), 1757–1780 (2009)

  29. Nedić, A., Ozdaglar, A.: Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 54(1), 48–61 (2009)

  30. Nedić, A., Ozdaglar, A., Parrilo, P.A.: Constrained consensus and optimization in multi-agent networks. IEEE Trans. Autom. Control 55(4), 922–938 (2010)

  31. Nesterov, Y.: Gradient methods for minimizing composite functions. Math. Program. 140, 125–161 (2013)

  32. Notarnicola, I., Notarstefano, G.: Constraint-coupled distributed optimization: a relaxation and duality approach. IEEE Trans. Control Netw. Syst. 7(1), 483–492 (2020)

  33. Nowak, R.: Distributed EM algorithms for density estimation and clustering in sensor networks. IEEE Trans. Signal Process. 51(8), 2245–2253 (2003)

  34. Polyak, B.: Introduction to Optimization. Optimization Software, New York (1987)

  35. Pu, S., Shi, W., Xu, J., Nedić, A.: Push-pull gradient methods for distributed optimization in networks. IEEE Trans. Autom. Control 66(1), 1–16 (2021)

  36. Qu, G., Li, N.: Harnessing smoothness to accelerate distributed optimization. IEEE Trans. Control Netw. Syst. 5(3), 1245–1260 (2018)

  37. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)

  38. Rockafellar, R.T., Uryasev, S.: Optimization of conditional value-at-risk. J. Risk 2, 21–41 (2000)

  39. Saadatniaki, F., Xin, R., Khan, U.A.: Decentralized optimization over time-varying directed graphs with row and column-stochastic matrices. IEEE Trans. Autom. Control 65(11), 4769–4780 (2020)

  40. Shi, W., Ling, Q., Wu, G., Yin, W.: EXTRA: an exact first-order algorithm for decentralized consensus optimization. SIAM J. Optim. 25(2), 944–966 (2015)

  41. Shi, W., Ling, Q., Wu, G., Yin, W.: A proximal gradient algorithm for decentralized composite optimization. IEEE Trans. Signal Process. 63(22), 6013–6023 (2015)

  42. Simonetto, A., Jamali-Rad, H.: Primal recovery from consensus-based dual decomposition for distributed convex optimization. J. Optim. Theory Appl. 168(1), 172–197 (2016)

  43. Su, Y., Wang, Q., Sun, C.: Distributed primal-dual method for convex optimization with coupled constraints. IEEE Trans. Signal Process. 70, 523–535 (2022)

  44. Tong, X., Feng, Y., Zhao, A.: A survey on Neyman–Pearson classification and suggestions for future research. WIREs Comput. Stat. 8(2), 64–81 (2016)

  45. Wiltz, A., Chen, F., Dimarogonas, D.V.: A consistency constraint-based approach to coupled state constraints in distributed model predictive control. In: 2022 IEEE 61st Conference on Decision and Control (CDC), pp. 3959–3964 (2022)

  46. Causality Workbench Team: A marketing dataset (2008). http://www.causality.inf.ethz.ch/data/CINA.html

  47. Wu, X., Wang, H., Lu, J.: Distributed optimization with coupling constraints. IEEE Trans. Autom. Control 68(3), 1847–1854 (2023)

  48. Xu, J., Tian, Y., Sun, Y., Scutari, G.: Distributed algorithms for composite optimization: unified framework and convergence analysis. IEEE Trans. Signal Process. 69, 3555–3570 (2021)

  49. Xu, J., Zhu, S., Soh, Y.C., Xie, L.: Augmented distributed gradient methods for multi-agent optimization under uncoordinated constant stepsizes. In: 2015 54th IEEE Conference on Decision and Control (CDC), pp. 2055–2060 (2015)

  50. Yuan, D., Proutiere, A., Shi, G.: Distributed online linear regressions. IEEE Trans. Inf. Theory 67(1), 616–639 (2021)

  51. Zhou, X., Ma, Z., Zou, S., Zhang, J.: Consensus-based distributed economic dispatch for multi micro energy grid systems under coupled carbon emissions. Appl. Energy 324, 119641 (2022)

  52. Zhu, K., Tang, Y.: Primal-dual \(\varepsilon \)-subgradient method for distributed optimization. J. Syst. Sci. Complex. 36, 577–590 (2020)

  53. Zhu, M., Martinez, S.: On distributed convex optimization under inequality and equality constraints. IEEE Trans. Autom. Control 57(1), 151–164 (2012)

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Kai Gong.

Additional information

Communicated by Jalal M. Fadili.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Liwei Zhang: This author was supported by the Natural Science Foundation of China (Nos. 11971089 and 11731013) and partially supported by Dalian High-level Talent Innovation Project (No. 2020RD09).

Appendices

Proof of Lemma 4.1

For simplicity, denote \(\delta _k=V_{k+1}^{-1}\left( \nabla \textrm{L}(\omega ^{k+1})-\nabla \textrm{L}(\omega ^k)\right) \); then (4.5) can be written equivalently as \(s^{k+1}=\mathcal {R}_ks^k+\delta _k\), and unrolling this recursion gives \(s^{k+1}=\varPhi _\mathcal {R}(k,1)s^1+\sum _{l=1}^{k-1}\varPhi _\mathcal {R}(k,l+1)\delta _l+\delta _k\). Since \(\{\frac{1}{2m}v_k\}\) is an absolute probability sequence for the matrix sequence \(\{\mathcal {R}_k\}\), it follows that

$$\begin{aligned} 1_{2m}\bar{s}^{k+1}=&\frac{1}{2m}1_{2m}v_{k+1}^\top s^{k+1}\\ =&\frac{1}{2m}1_{2m}v_{k+1}^\top \left( \varPhi _\mathcal {R}(k,1)s^1+\delta _k\right) +\frac{1}{2m}1_{2m}v_{k+1}^\top \sum _{l=1}^{k-1}\varPhi _\mathcal {R}(k,l+1)\delta _l\\ =&\frac{1}{2m}1_{2m}v_1^\top s^1+\frac{1}{2m}\sum _{l=1}^{k-1}1_{2m}v_l^\top \delta _l +\frac{1}{2m}1_{2m}v_{k+1}^\top \delta _k. \end{aligned}$$

Applying the subadditivity of \(\Vert \cdot \Vert \) yields

$$\begin{aligned} \begin{aligned} \Vert s^{k+1}-1_{2m}\bar{s}^{k+1}\Vert \le&\left\| \left( \varPhi _\mathcal {R}(k,1)-\frac{1}{2m}1_{2m}v_1^\top \right) s^1\right\| +\left\| \left( I_{2m}-\frac{1}{2m}1_{2m}v_{k+1}^\top \right) \delta _k\right\| \\&\quad +\sum _{l=1}^{k-1}\left\| \left( \varPhi _\mathcal {R}(k,l+1)-\frac{1}{2m}1_{2m}v_l^\top \right) \delta _l\right\| . \end{aligned} \end{aligned}$$
(A.1)

For the matrix sequence \(\{\varPhi _\mathcal {R}(k,s)\}\), a result similar to Lemma C.1 holds, i.e.,

$$\begin{aligned} \left\| \varPhi _\mathcal {R}(k,s)-\frac{1}{2m}1_{2m}v_s^\top \right\| \le C_1\rho ^{k-s}, \quad \forall \, k\ge s, \end{aligned}$$

where \(\rho \in (0,1)\) and \(C_1\) is some positive constant. Due to the compactness of the subset \(\mathcal {X}\), \(\delta _k\) is bounded; denote an upper bound of \(\Vert \delta _k\Vert \) by \(\widetilde{C}\). Therefore,

$$\begin{aligned} \Vert s^{k+1}-1_{2m}\bar{s}^{k+1}\Vert&\le \rho ^{k-1}\Vert s^1\Vert +\widetilde{C}\sum _{l=1}^{k-1}\rho ^{k-1-l}+\widetilde{C}\\&\le \Vert s^1\Vert +\frac{\widetilde{C}}{1-\rho }+\widetilde{C}=M,\quad \forall \, k\ge 1. \end{aligned}$$
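The unrolled form of the recursion used at the start of this proof is easy to sanity-check numerically. Below is a minimal sketch with synthetic row-stochastic matrices \(\mathcal {R}_k\) and bounded perturbations \(\delta _k\) (arbitrary stand-ins, not the quantities generated by the algorithm), verifying that iterating \(s^{k+1}=\mathcal {R}_ks^k+\delta _k\) matches the product form.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 4, 30  # hypothetical state dimension and horizon

def row_stochastic(rng, n):
    """Random row-stochastic mixing matrix with positive diagonal."""
    M = rng.random((n, n)) + np.eye(n)
    return M / M.sum(axis=1, keepdims=True)

R = [row_stochastic(rng, n) for _ in range(K)]
delta = [0.1 * rng.standard_normal(n) for _ in range(K)]  # bounded perturbations

# Iterate the perturbed consensus recursion s^{k+1} = R_k s^k + delta_k.
s1 = rng.standard_normal(n)
s = s1.copy()
for k in range(K):
    s = R[k] @ s + delta[k]

def Phi(k_hi, k_lo):
    """Transition product R_{k_hi} ... R_{k_lo}; an empty product is the identity."""
    P = np.eye(n)
    for j in range(k_lo, k_hi + 1):
        P = R[j] @ P
    return P

# Unrolled form: Phi(K-1, 0) s^1 + sum_l Phi(K-1, l+1) delta_l + delta_{K-1}.
s_unrolled = Phi(K - 1, 0) @ s1
for l in range(K - 1):
    s_unrolled += Phi(K - 1, l + 1) @ delta[l]
s_unrolled += delta[K - 1]

print(np.allclose(s, s_unrolled))  # True
```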

Proof of Lemma 4.2

First, we verify the following inequality:

$$\begin{aligned} \left\| \omega ^{k+1}-1_{2m}\bar{\omega }^{k+1}\right\| \le C_2\beta ^{k-1}+C_3\sum _{l=1}^{k-1}\beta ^{k-1-l}\alpha _l+C_4\alpha _k. \end{aligned}$$

Then, combining the conditions on the step-size sequence \(\{\alpha _k\}\) in Assumption 3.1 with the results of Lemma C.4, the claimed results follow readily.

(a): From (4.4), there exists \(e^k\) such that \(\omega ^{k+1}=\mathcal {A}_k\omega ^k-\alpha _kV_ks^k+e^k\). Similar to inequality (A.1), it holds that

$$\begin{aligned} \begin{aligned}&\left\| \omega ^{k+1}-1_{2m}\bar{\omega }^{k+1}\right\| \\&\quad \le \left\| \left( \varPhi _\mathcal {A}(k,1)-1_{2m}\phi _1^\top \right) \omega ^1\right\| +\left\| \left( I_{2m}-1_{2m}\phi _k^\top \right) \left( e^k-\alpha _kV_ks^k\right) \right\| \\&\qquad +\sum _{l=1}^{k-1}\left\| \left( \varPhi _\mathcal {A}(k,l+1)-1_{2m}\phi _l^\top \right) \left( e^l-\alpha _lV_ls^l\right) \right\| . \end{aligned} \end{aligned}$$
(B.1)

Since \(\omega ^{k+1}\) is the projection of \(\mathcal {A}_k\omega ^k-\alpha _kV_ks^k\) onto \(\varOmega \), for any \(\omega \in \varOmega \), one has

$$\begin{aligned} \left\langle \mathcal {A}_k\omega ^k-\alpha _kV_ks^k-\omega ^{k+1},\omega -\omega ^{k+1}\right\rangle \le 0, \end{aligned}$$

which implies that

$$\begin{aligned}&\Vert \omega ^{k+1}-\mathcal {A}_k\omega ^k\Vert ^2 =\Vert e^k-\alpha _kV_ks^k\Vert ^2\\&\quad =\Vert e^k\Vert ^2-2\left\langle \alpha _kV_ks^k,e^k\right\rangle +\Vert \alpha _kV_ks^k\Vert ^2\\&\quad =\Vert e^k\Vert ^2+2\left\langle \omega ^{k+1}-\mathcal {A}_k\omega ^k-e^k,e^k\right\rangle +\Vert \alpha _kV_ks^k\Vert ^2\\&\quad \le \alpha _k^2\Vert V_ks^k\Vert ^2-\Vert e^k\Vert ^2. \end{aligned}$$

From Lemma 4.1, it follows that

$$\begin{aligned} \Vert e^k\Vert \le \alpha _k\Vert V_ks^k\Vert \le \alpha _k\Vert s^k\Vert \le M\alpha _k, \quad \forall \, k\ge 0. \end{aligned}$$
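The bound \(\Vert e^k\Vert \le \alpha _k\Vert V_ks^k\Vert \) reflects a general geometric fact about projections: if \(x\in \varOmega \), the projection residual of \(x+d\) onto \(\varOmega \) is at most \(\Vert d\Vert \), since \(\mathrm {dist}(x+d,\varOmega )\le \Vert d\Vert \). A small sketch with a hypothetical box standing in for \(\varOmega \):

```python
import numpy as np

rng = np.random.default_rng(1)
lo, hi = -1.0, 1.0           # hypothetical box Omega = [lo, hi]^5

def proj(y):
    """Euclidean projection onto the box."""
    return np.clip(y, lo, hi)

for _ in range(1000):
    x = rng.uniform(lo, hi, size=5)   # feasible point (role of A_k w^k)
    d = rng.standard_normal(5)        # step (role of -alpha_k V_k s^k)
    y = x + d
    e = proj(y) - y                   # residual (role of e^k)
    # dist(y, Omega) <= ||y - x|| = ||d|| because x is feasible
    assert np.linalg.norm(e) <= np.linalg.norm(d) + 1e-12
print("ok")
```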

In view of the results in Lemmas C.1 and C.2, one can obtain a modified version of inequality (B.1) as follows:

$$\begin{aligned} \Vert \omega ^{k+1}-1_{2m}\bar{\omega }^{k+1}\Vert \le C\Vert \omega ^1\Vert \beta ^{k-1}+2M\alpha _k+ 2M\sum _{l=1}^{k-1}\beta ^{k-1-l}\alpha _l. \end{aligned}$$
(B.2)

Under Assumption 3.1 and Lemma C.4 (a), it is clear that

$$\begin{aligned} \lim _{k\rightarrow \infty }\Vert \omega ^k-1_{2m}\bar{\omega }^k\Vert =0. \end{aligned}$$

(b): Multiplying both sides of inequality (B.2) by \(\alpha _{k+1}\) yields

$$\begin{aligned}&\alpha _{k+1}\Vert \omega ^{k+1}-1_{2m}\bar{\omega }^{k+1}\Vert \\&\quad \le C_2\beta ^{k-1}\alpha _{k+1}+2C_3\sum _{l=1}^{k-1} \beta ^{k-1-l}\alpha _l\alpha _{k+1}+C_4\alpha _k\alpha _{k+1}\\&\quad \le C_2\left( \beta ^{2(k-1)}+\alpha _{k+1}^2\right) + C_3\sum _{l=1}^{k-1}\beta ^{k-1-l}\left( \alpha _l^2+\alpha _{k+1}^2\right) +C_4\left( \alpha _k^2+\alpha _{k+1}^2\right) \\&\quad \le C_2\beta ^{2(k-1)}+C_3\sum _{l=1}^{k-1}\beta ^{k-1-l}\alpha _l^2+C_4\alpha _k^2+C_5\alpha _{k+1}^2, \end{aligned}$$

where \(C_2, C_3, C_4, C_5\) are some positive constants. Once again, under Assumption 3.1 and Lemma C.4 (b), it is concluded that

$$\begin{aligned} \sum _{k=0}^\infty \alpha _k\Vert \omega ^k-1_{2m}\bar{\omega }^k\Vert <\infty . \end{aligned}$$

In Lemma 4.2, the claimed results about the sequence \(\{s^k\}\) parallel those for \(\{\omega ^k\}\); the proof is omitted here.

Supporting Lemmas

Lemma C.1

([29], Lemma 4) Under Assumptions 2.1 and 2.2, it holds that

  1. (a)

\(\lim _{k\rightarrow \infty }\varPhi _\mathcal {A}(k,s)=1_{2m}\mu _s^\top \), where \(\mu _s\in \mathbb {R}^{2m}\) is a stochastic vector for each s.

  2. (b)

    For any i, the entries \(\varPhi _\mathcal {A}(k,s)_{ij}, j=1,\dots ,2m\), converge to the same limit \((\mu _s)_i\) as \(k\rightarrow \infty \) with a geometric rate, i.e., for each i and \(s\ge 0\),

    $$\begin{aligned} \left| \varPhi _\mathcal {A}(k,s)_{ij}-\left( \mu _s\right) _i\right| \le C\beta ^{k-s},\ \forall \, k\ge s, \end{aligned}$$

where \(C=2\frac{1+\eta ^{-B_0}}{1-\eta ^{B_0}}\), \(\beta =(1-\eta ^{B_0})^{1/B_0}\in (0,1)\), \(\eta \) is the lower bound in Assumption 2.2, \(B_0=(m-1)B\), and the integer B is defined in Assumption 2.1.
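Lemma C.1 is the workhorse behind the geometric bounds above: backward products of row-stochastic matrices with entries bounded away from zero converge to a rank-one matrix \(1_{2m}\mu _s^\top \), i.e., every column of the product flattens to a constant. A quick numerical illustration with synthetic mixing matrices (the dimension and positivity margin are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
m2 = 5  # hypothetical dimension standing in for 2m

def mixing(rng, n):
    """Row-stochastic matrix with entries bounded away from zero."""
    M = rng.random((n, n)) + 0.2 * np.ones((n, n))
    return M / M.sum(axis=1, keepdims=True)

# Backward products Phi(k, 0) = A_k ... A_0 approach 1 mu_0^T: the spread
# within each column shrinks geometrically toward zero.
P = np.eye(m2)
spread = []
for k in range(40):
    P = mixing(rng, m2) @ P
    spread.append((P.max(axis=0) - P.min(axis=0)).max())

print(spread[0], spread[-1])  # the spread decays toward 0
```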

Lemma C.2

([39], Corollary 1) Under the assumptions of Lemma C.1, the sequence \(\{\phi _k\}\) is an absolute probability sequence for the matrix sequence \(\{\mathcal {A}_k\}\), where

$$\begin{aligned} \begin{aligned} \phi _k^\top&=\mu _s^\top ,\quad k=sB,\\ \phi _k^\top&=\mu _{s+1}^\top \mathcal {A}_{(s+1)B-1}\cdots \mathcal {A}_k,\quad k\in \left( sB,(s+1)B\right) ,\\ \text {for} \ \ s&=0,1,2,\cdots . \end{aligned} \end{aligned}$$
(C.1)
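The defining property behind Lemma C.2 is that an absolute probability sequence satisfies \(\phi _{k+1}^\top \mathcal {A}_k=\phi _k^\top \). For the \(B=1\) case of (C.1), where \(\phi _k=\mu _k\), this can be checked numerically by approximating each \(\mu _s\) with a long backward product (again with synthetic mixing matrices, an illustration rather than the paper's sequences):

```python
import numpy as np

rng = np.random.default_rng(4)
n, K = 4, 150  # hypothetical dimension and horizon (B = 1, so phi_k = mu_k)

def mixing(rng, n):
    """Row-stochastic matrix with entries bounded away from zero."""
    M = rng.random((n, n)) + 0.2 * np.ones((n, n))
    return M / M.sum(axis=1, keepdims=True)

A = [mixing(rng, n) for _ in range(K)]

def mu(s):
    """mu_s^T: common row limit of Phi(k, s) = A_k ... A_s as k grows."""
    P = np.eye(n)
    for j in range(s, K):
        P = A[j] @ P
    assert (P.max(axis=0) - P.min(axis=0)).max() < 1e-8  # rows have coalesced
    return P[0]

# Absolute-probability property: mu_{k+1}^T A_k = mu_k^T.
for k in range(5):
    assert np.allclose(mu(k + 1) @ A[k], mu(k))
print("ok")
```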

Lemma C.3

([34], Lemma 11, Chapter 2.2) Let \(\{b_k\}, \{c_k\}, \{d_k\}\) be non-negative sequences. Suppose that \(\sum _{k=0}^\infty c_k<\infty \) and

$$\begin{aligned} b_{k+1}\le b_k-d_k+c_k,\quad \forall \, k\ge 1, \end{aligned}$$

then the sequence \(\{b_k\}\) converges and \(\sum _{k=0}^\infty d_k<\infty \).
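A concrete instance of Lemma C.3, with hypothetical sequences chosen purely for illustration: take \(d_k=b_k/2\) and \(c_k=1/(k+1)^2\), so the recursion \(b_{k+1}=b_k-d_k+c_k\) satisfies the hypotheses with equality; both conclusions (convergence of \(\{b_k\}\) and summability of \(\{d_k\}\)) are then visible numerically.

```python
import numpy as np

# b_{k+1} = b_k - d_k + c_k with d_k = b_k / 2 and summable c_k = 1/(k+1)^2.
K = 10000
c = 1.0 / (np.arange(1, K + 1) ** 2)
b = np.empty(K + 1)
d = np.empty(K)
b[0] = 5.0
for k in range(K):
    d[k] = b[k] / 2.0
    b[k + 1] = b[k] - d[k] + c[k]

# Telescoping gives sum d_k = b_0 - b_K + sum c_k, hence sum d_k <= b_0 + sum c_k.
print(b[-1], d.sum(), b[0] + c.sum())
```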

Lemma C.4

([30], Lemma 7) Let \(\beta \in (0,1)\), and \(\{\gamma _k\}\) be a positive scalar sequence.

  1. (a)

    If \(\lim _{k\rightarrow \infty }\gamma _k=0\), then \(\lim _{k\rightarrow \infty }\sum _{l=0}^k\beta ^{k-l}\gamma _l=0\).

  2. (b)

    Furthermore, if \(\sum _{k=0}^\infty \gamma _k<\infty \), then \(\sum _{k=0}^\infty \sum _{l=0}^k\beta ^{k-l}\gamma _l<\infty \).
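Both parts of Lemma C.4 can be checked numerically with arbitrary choices, say \(\beta =0.9\) and \(\gamma _k=k^{-3/2}\): the convolution \(t_k=\sum _{l\le k}\beta ^{k-l}\gamma _l\) vanishes, and its partial sums stay below \(\frac{1}{1-\beta }\sum _l\gamma _l\) (swap the order of summation to see the bound).

```python
import numpy as np

beta, K = 0.9, 20000
gamma = 1.0 / (np.arange(1, K + 1) ** 1.5)  # positive and summable

# t_k = sum_{l <= k} beta^{k-l} gamma_l via the recursion t_k = beta t_{k-1} + gamma_k.
t = np.empty(K)
t[0] = gamma[0]
for k in range(1, K):
    t[k] = beta * t[k - 1] + gamma[k]

# Part (a): t_k -> 0.  Part (b): sum_k t_k <= (1/(1-beta)) sum_l gamma_l.
print(t[-1], t.sum(), gamma.sum() / (1 - beta))
```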

About this article

Cite this article

Gong, K., Zhang, L. Primal-Dual Algorithm for Distributed Optimization with Coupled Constraints. J Optim Theory Appl 201, 252–279 (2024). https://doi.org/10.1007/s10957-024-02393-7
