
Differentially private multi-agent constraint optimization

Published in: Autonomous Agents and Multi-Agent Systems

Abstract

Distributed constraint optimization (DCOP) is a framework in which multiple agents with private constraints (or preferences) cooperate to achieve a common goal optimally. DCOPs are applicable in several multi-agent coordination/allocation problems, such as vehicle routing, radio frequency assignments, and distributed meeting scheduling. However, optimization scenarios may involve multiple agents that want to protect the privacy of their preferences. Researchers have proposed privacy-preserving algorithms for DCOPs that provide improved privacy protection through cryptographic primitives such as partial homomorphic encryption, secret sharing, and secure multiparty computation. These privacy benefits come at the expense of high computational complexity. Moreover, such an approach does not constitute a rigorous privacy guarantee for optimization outcomes, as the result of the computation itself may compromise agents’ preferences. In this work, we show how to achieve privacy, specifically differential privacy, by randomizing the solving process. In particular, we present P-Gibbs, which adapts the current state-of-the-art algorithm for DCOPs, namely SD-Gibbs, to obtain differential privacy guarantees with much higher computational efficiency. Experiments on benchmark problems such as Ising, graph-coloring, and meeting-scheduling show P-Gibbs’ privacy-performance trade-off for varying privacy budgets, compared against the SD-Gibbs algorithm. More concretely, we empirically show that P-Gibbs provides fair solutions for competitive privacy budgets.


Availability of data and materials

The implementation is adapted from pyDCOP [53], available at github.com/Orange-OpenSource/pyDcop. We perform our experiments on synthetic data generated using pyDCOP’s command line tool. Our codebase is available at: github.com/magnetar-iiith/PGiBBS.

Notes

  1. We can also define a DCOP that minimizes the total utility, i.e., \(F(\textbf{X}^*) = \max _{\textbf{X}\in \mathcal {D}} -F(\textbf{X})\).

  2. If any agent has a zero utility for some value, then all agents must have zero utility, and w.l.o.g., we can simply exclude such values from all domains.

  3. For example, DPOP, a non-private, complete algorithm, timed out after 24 h when computing (i) an Ising instance with 10 variables, (ii) a graph-coloring instance with 12 variables and \(|D|=8\), and (iii) a meeting-scheduling instance with 25 variables and \(|D|=20\). For details, refer to Appendix 1.

  4. We remark that this behavior is different from the Distributed Simulated Annealing (DSAN) algorithm for DCOPs [54, 55]. DSAN is an iterative optimization algorithm with a temperature parameter that aims to control the likelihood of accepting worse solutions. DSAN consists of an annealing schedule that determines the change in the temperature parameter over time. As the parameter decreases, DSAN becomes more selective and explores the solution space more effectively. Instead of selecting the next assignment through a specific, utility-based distribution like in SD-Gibbs (Eq. 2), in DSAN, an agent randomly chooses its next assignment. E.g., by uniform sampling or by swapping values with neighboring agents. DSAN is neither complete nor private.

  5. We omit Ising from this set of experiments, as Ising instances with \(> 20\) agents ran out of memory during execution.

  6. JSD [58] is a statistical method to measure the similarity of two probability distributions. It is based on KL-divergence, but does not require the same support for the distributions.

  7. Compared to the instances created in Sect. 6, we only scale down the number of variables and the domain size while keeping the nature of the constraints the same.
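The contrast in Note 4 between SD-Gibbs' utility-based sampling and DSAN's uniform proposal can be sketched as follows. This is an illustrative sketch only: the function name `local_utility` and the chain of acceptance are assumptions, not either algorithm's actual implementation.

```python
import math
import random

def sdgibbs_step(domain, local_utility, rng=random):
    """SD-Gibbs-style update (sketch of Eq. 2): sample the next value
    from a softmax distribution proportional to exp(utility)."""
    weights = [math.exp(local_utility(v)) for v in domain]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for v, w in zip(domain, weights):
        acc += w
        if r < acc:
            return v
    return domain[-1]

def dsan_step(domain, rng=random):
    """DSAN-style update: the next assignment is drawn uniformly at
    random, independent of utilities (acceptance of a worse solution is
    then governed separately by the annealing schedule)."""
    return rng.choice(domain)
```

With a utility function that strongly favors one value, `sdgibbs_step` concentrates almost all probability mass on it, whereas `dsan_step` keeps every value equally likely; this is exactly the difference the footnote describes.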

References

  1. Rosser, M. (2003). Basic Mathematics for Economists. Routledge.

  2. Yokoo, M., Durfee, E. H., Ishida, T., & Kuwabara, K. (1998). The distributed constraint satisfaction problem: formalization and algorithms. IEEE Transactions on Knowledge and Data Engineering, 10(5), 673–685.

  3. Faltings, B., Léauté, T., & Petcu, A. (2008). Privacy guarantees through distributed constraint satisfaction. In 2008 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (Vol. 2, pp. 350–358). IEEE.

  4. Abadi, M., Chu, A., Goodfellow, I., McMahan, H.B., Mironov, I., Talwar, K., & Zhang, L.(2016) Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on CCS (pp. 308–318).

  5. Bonawitz, K., Ivanov, V., Kreuter, B., Marcedone, A., McMahan, H.B., Patel, S., Ramage, D., Segal, A., & Seth, K. (2017) Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (pp. 1175–1191).

  6. Huai, M., Wang, D., Miao, C., Xu, J., & Zhang, A. (2019). Privacy-aware synthesizing for crowdsourced data. In IJCAI (pp. 2542–2548).

  7. Modi, P. J., Shen, W.-M., Tambe, M., & Yokoo, M. (2005). Adopt: Asynchronous distributed constraint optimization with quality guarantees. Artificial Intelligence, 161(1–2), 149–180.

  8. Hirayama, K., & Yokoo, M. (1997). Distributed partial constraint satisfaction problem. In International Conference on Principles and Practice of Constraint Programming (pp. 222–236). Springer.

  9. Gershman, A., Meisels, A., & Zivan, R. (2009). Asynchronous forward bounding for distributed cops. Journal of Artificial Intelligence Research, 34, 61–88.

  10. Maheswaran, R.T., Pearce, J.P., & Tambe, M. (2006) In Scerri, P., Vincent, R., Mailler, R. (eds.) A Family of Graphical-Game-Based Algorithms for Distributed Constraint Optimization Problems (pp. 127–146). Springer.

  11. Petcu, A., & Faltings, B. (2005). DPOP: A scalable method for multiagent constraint optimization. In IJCAI 05 (pp. 266–271).

  12. Farinelli, A., Rogers, A., Petcu, A., & Jennings, N.R. (2008) Decentralised coordination of low-power embedded devices using the Max-Sum algorithm. In AAMAS (pp. 639–646).

  13. Ottens, B., Dimitrakakis, C., & Faltings, B. (2012). DUCT: An upper confidence bound approach to distributed constraint optimization problems. In AAAI (pp. 528–534).

  14. Ottens, B., Dimitrakakis, C., & Faltings, B. (2017). DUCT: An upper confidence bound approach to distributed constraint optimization problems. ACM Transactions on Intelligent Systems and Technology (TIST), 8(5), 1–27.

  15. Fioretto, F., Pontelli, E., & Yeoh, W. (2018). Distributed constraint optimization problems and applications: A survey. Journal of Artificial Intelligence Research, 61, 623–698.

  16. Nguyen, D. T., Yeoh, W., Lau, H. C., & Zivan, R. (2019). Distributed gibbs: A linear-space sampling-based DCOP algorithm. Journal of Artificial Intelligence Research, 64, 705–748.

  17. Léauté, T., & Faltings, B. (2013). Protecting privacy through distributed computation in multi-agent decision making. Journal of Artificial Intelligence Research, 47, 649–695.

  18. Tassa, T., Grinshpoun, T., & Zivan, R. (2017). Privacy preserving implementation of the Max-Sum algorithm and its variants. Journal of Artificial Intelligence Research, 59, 311–349.

  19. Grinshpoun, T., & Tassa, T. (2016). P-SyncBB: A privacy preserving branch and bound DCOP algorithm. Journal of Artificial Intelligence Research, 57, 621–660.

  20. Dwork, C., McSherry, F., Nissim, K., & Smith, A. (2006). Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference (pp. 265–284). Springer.

  21. Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4), 211–407.

  22. Liao, J. (1998). Variance reduction in Gibbs sampler using quasi random numbers. Journal of Computational and Graphical Statistics, 7(3), 253–266.

  23. Cerquides, J., Rodríguez-Aguilar, J. A., Emonet, R., & Picard, G. (2021). Solving highly cyclic distributed optimization problems without busting the bank: A decimation-based approach. Logic Journal of the IGPL, 29(1), 72–95.

  24. Maheswaran, R., Tambe, M., Bowring, E., Pearce, J., & Varakantham, P. (2004). Taking DCOP to the real world: Efficient complete solutions for distributed event scheduling. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (pp 310–317).

  25. Fischer, W., & Muller, B. W. (1991) Method and apparatus for the manufacture of a product having a substance embedded in a carrier. Google Patents. US Patent 5,043,280.

  26. Gelly, S., & Silver, D. (2007) Combining online and offline knowledge in UCT. In Proceedings of the 24th International Conference on Machine Learning (pp. 273–280).

  27. Grinshpoun, T., Tassa, T., Levit, V., & Zivan, R. (2019). Privacy preserving region optimal algorithms for symmetric and asymmetric DCOPs. Artificial Intelligence, 266, 27–50.

  28. Tassa, T., Grinshpoun, T., & Yanai, A. (2021). PC-SyncBB: A privacy preserving collusion secure DCOP algorithm. Artificial Intelligence, 297, 103501.

  29. Kogan, P., Tassa, T., & Grinshpoun, T. (2022) Privacy preserving DCOP solving by mediation. In Cyber Security, Cryptology, and Machine Learning - 6th International Symposium, CSCML. Lecture Notes in Computer Science (Vol. 13301, pp. 487–498).

  30. Yao, A.C. (1982) Protocols for secure computations. In 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982) (pp. 160–164). IEEE

  31. Katagishi, H., & Pearce, J. P. (2007) KOPT: Distributed DCOP algorithm for arbitrary k-optima with monotonically increasing utility. DCR-07

  32. Kiekintveld, C., Yin, Z., Kumar, A., & Tambe, M. (2010) Asynchronous algorithms for approximate distributed constraint optimization with quality bounds. In AAMAS (vol. 10, pp. 133–140).

  33. Maheswaran, R. T., Pearce, J. P., Bowring, E., Varakantham, P., & Tambe, M. (2006). Privacy loss in distributed constraint reasoning: A quantitative framework for analysis and its applications. Autonomous Agents and Multi-Agent Systems, 13(1), 27–60.

  34. Brito, I., Meisels, A., Meseguer, P., & Zivan, R. (2009). Distributed constraint satisfaction with partially known constraints. Constraints, 14(2), 199–234.

  35. Grinshpoun, T., Grubshtein, A., Zivan, R., Netzer, A., & Meisels, A. (2013). Asymmetric distributed constraint optimization problems. Journal of Artificial Intelligence Research, 47, 613–647.

  36. Savaux, J., Vion, J., Piechowiak, S., Mandiau, R., Matsui, T., Hirayama, K., Yokoo, M., Elmane, S., & Silaghi, M. (2017). Utilitarian approach to privacy in distributed constraint optimization problems. In FLAIRS Conference (pp. 454–459).

  37. Savaux, J., Vion, J., Piechowiak, S., Mandiau, R., Matsui, T., Hirayama, K., Yokoo, M., Elmane, S., & Silaghi, M. (2020). Privacy stochastic games in distributed constraint reasoning. Annals of Mathematics and Artificial Intelligence, 88, 691–715.

  38. Yokoo, M., Etzioni, O., Ishida, T., & Jennings, N. (2001). Distributed constraint satisfaction: Foundations of cooperation in multi-agent systems. Springer.

  39. Hamadi, Y., Bessiere, C., & Quinqueton, J. (1998) Distributed intelligent backtracking. In ECAI (pp. 219–223).

  40. Dwork, C. (2006) Differential privacy. In 33rd International Colloquium on Automata, Languages and Programming, Part II (ICALP 2006). Lecture Notes in Computer Science (vol. 4052, pp. 1–12).

  41. Rubinstein, B. I., Bartlett, P. L., Huang, L., & Taft, N. (2009) Learning in a large function space: Privacy-preserving mechanisms for SVM learning. arXiv preprint arXiv:0911.5708

  42. Chaudhuri, K., Sarwate, A., & Sinha, K. (2012). Near-optimal differentially private principal components. Advances in Neural Information Processing Systems, 25.

  43. Basu, D., Dimitrakakis, C., & Tossou, A. (2019) Differential privacy for multi-armed bandits: What is it and what is its cost? arXiv preprint arXiv:1905.12298

  44. Papernot, N., Abadi, M., Erlingsson, U., Goodfellow, I., & Talwar, K.(2017) Semi-supervised knowledge transfer for deep learning from private training data. In ICLR. openreview.net/forum?id=HkwoSDPgg

  45. Roth, A. (2012). Buying private data at auction: the sensitive surveyor’s problem. ACM SIGecom Exchanges, 11(1), 1–8.

  46. Pai, M. M., & Roth, A. (2013). Privacy and mechanism design. ACM SIGecom Exchanges, 12(1), 8–29.

  47. McSherry, F., & Talwar, K. (2007) Mechanism design via differential privacy. In 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS’07) (pp. 94–103). IEEE.

  48. Huang, Z., Mitra, S., & Vaidya, N. (2015) Differentially private distributed optimization. In Proceedings of the 2015 International Conference on Distributed Computing and Networking (pp. 1–10).

  49. Triastcyn, A., & Faltings, B. (2020) Bayesian differential privacy for machine learning. In Proceedings of the 37th International Conference on Machine Learning.

  50. Rényi, A. (1961) On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics (vol. 4, pp. 547–562). University of California Press.

  51. Balle, B., Barthe, G., & Gaboardi, M. (2018). Privacy amplification by subsampling: Tight analyses via couplings and divergences. Advances in Neural Information Processing Systems, 31.

  52. Aardal, K. I., Van Hoesel, S. P., Koster, A. M., Mannino, C., & Sassano, A. (2007). Models and solution techniques for frequency assignment problems. Annals of Operations Research, 153(1), 79–129.

  53. Rust, P., Picard, G., & Ramparany, F. (2019) pyDCOP: A DCOP library for dynamic IoT systems. In International Workshop on Optimisation in Multi-Agent Systems.

  54. Arshad, M., & Silaghi, M.C. (2004). Distributed simulated annealing. Distributed constraint problem solving and reasoning in multi-agent systems 112

  55. Chapman, A. C., Rogers, A., & Jennings, N. R. (2011). Benchmarking hybrid algorithms for distributed constraint optimisation games. Autonomous Agents and Multi-agent Systems, 22, 385–414.

  56. Kirkpatrick, S., Gelatt, C. D., Jr., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598), 671–680.

  57. Papernot, N., & Steinke, T. (2021) Hyperparameter tuning with Renyi differential privacy. arXiv preprint arXiv:2110.03620

  58. Liese, F., & Vajda, I. (2018) Convex statistical distances. Statistical Inference for Engineers and Data Scientists

  59. Damle, S., Triastcyn, A., Faltings, B., & Gujar, S. (2021) Differentially private multi-agent constraint optimization. In IEEE/WIC/ACM WI-IAT (pp. 422–429).


Funding

Not applicable.

Author information

Contributions

All authors contributed to the problem formulation and the design of the algorithm. SD and AT contributed to the theoretical proofs. SD worked on the algorithm’s implementation and prepared the figures. SD and AT wrote the paper. All authors reviewed the manuscript.

Corresponding author

Correspondence to Sankarshan Damle.

Ethics declarations

Conflict of interest

The authors have no competing interests as defined by Springer.

Links to Own Prior Work

Some text passages of this manuscript (e.g., preliminaries) have been drawn from our prior work published as a conference paper [59]. The current manuscript differs from the conference paper as follows:

  1. We provide a concrete example to highlight the privacy leak in SD-Gibbs (the state-of-the-art DCOP algorithm).

  2. We introduce a novel privacy metric, namely solution privacy, to study the additional information leak in privacy-preserving DCOP algorithms.

  3. While [59] only provides proof sketches, the current manuscript provides formal proofs for each result presented.

  4. We provide an additional set of experiments, including (i) an additional benchmark and (ii) a study of P-Gibbs’ hyperparameters, to isolate the impact of each hyperparameter on the quality of P-Gibbs’ solution and the privacy budget.

  5. We introduce a novel metric, namely assignment distance, to explain the privacy protection in P-Gibbs compared to SD-Gibbs.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1 Comparing P-Gibbs’ quality of solution with DPOP

Complete algorithms like DPOP fail to solve sufficiently large problems. More concretely, in our experiments, the pyDCOP’s DPOP solver timed out after 24 h for (i) an Ising benchmark instance with 10 variables/agents, (ii) a graph-coloring benchmark instance with 12 variables/agents and \(|D|=8\), (iii) a meeting-scheduling benchmark instance with 25 variables/agents and \(|D|=30\). Recall that in Sect. 6, we conduct experiments on significantly larger problems than these.

As private algorithms built atop DPOP (i.e., P-DPOP, P\(^{3/2}\)-DPOP and P\(^2\)-DPOP) use computationally expensive cryptographic primitives, these algorithms are less efficient than DPOP [3, 17]. Given this evidence, we conclude that P-Gibbs is significantly more scalable than the DPOP family of algorithms.

Fig. 6
figure 6

The average and standard deviation of P-Gibbs’ Solution Quality (SQ) for different privacy budgets, compared to DPOP. Note that the case with \(\epsilon =0.046\) corresponds to P-Gibbs\(_\infty\) as \(\gamma =\infty\)

Next, we compare the quality of solutions given by the DPOP family of algorithms and P-Gibbs. As P-DPOP, P\(^{3/2}\)-DPOP, and P\(^2\)-DPOP do not add any noise/randomness to the computation process, it suffices to compare the solution quality of DPOP and SD-Gibbs (see Definition 6). We begin by setting up the benchmark problems.

1.1 Solution Quality

Benchmarks. As in Sect. 6, the instances are created using pyDCOP’s generate option (see Note 7).

  1. Ising [23]. We generate 5 sample Ising problems with variables/agents between [5, 8] and \(D_i=\{0,1\},\forall i\). The constraints are of two types: (i) binary constraints whose strength is sampled from \(\mathcal {U}[-\beta ,\beta ]\) where \(\beta \in [1,10)\) and (ii) unary constraints whose strength is sampled from \(\mathcal {U}[-\rho ,\rho ]\) where \(\rho \in [0.05,0.9)\). Ising is a minimization problem.

  2. Graph-coloring (GC). We generate 5 sample graph-coloring problems, with the number of agents/variables between [8, 12] and agents’ domain size between [10, 20). Each constraint is a random integer taken from (0, 10). Graph-coloring is a minimization problem.

  3. Meeting-scheduling (MS). We generate 5 sample meeting-scheduling problems, with the number of agents and variables between [15, 20] and the number of slots, i.e., the domain for each agent, randomly chosen from [20, 30]. Each constraint is a random integer taken from (0, 100), while each meeting may randomly occupy [1, 5] slots. Meeting-scheduling is a maximization problem.
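As a minimal sketch of how an Ising instance of the kind described above might be drawn: the chain topology and function name below are illustrative assumptions; the paper itself uses pyDCOP's generator.

```python
import random

def sample_ising_instance(n_vars, seed=0):
    """Hypothetical sketch of the Ising benchmark described above:
    binary variables {0, 1}, binary constraint strengths drawn from
    U[-beta, beta], and unary strengths from U[-rho, rho]."""
    rng = random.Random(seed)
    beta = rng.uniform(1.0, 10.0)   # beta in [1, 10)
    rho = rng.uniform(0.05, 0.9)    # rho in [0.05, 0.9)
    # Illustrative chain topology; pyDCOP generates grid/graph structures.
    binary = {(i, i + 1): rng.uniform(-beta, beta) for i in range(n_vars - 1)}
    unary = {i: rng.uniform(-rho, rho) for i in range(n_vars)}
    return binary, unary
```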

Results. Fig. 6 depicts the results. As previously observed in Fig. 4, P-Gibbs’ solution quality improves as the problem size (m) increases. As DPOP does not scale, we see that the solution quality remains around 50–75%, with the quality increasing as \(\epsilon\) increases. Furthermore, as P-DPOP, P\(^{3/2}\)-DPOP, and P\(^2\)-DPOP do not add noise/randomness in the solution process, we argue that these results also hold for them.

Appendix 2 Comparing P-Gibbs’ quality of solution with max-sum

Fig. 7
figure 7

The average and standard deviation of P-Gibbs’ Solution Quality (SQ) for different privacy budgets, compared to Max-Sum. Note that the case with \(\epsilon =0.046\) corresponds to P-Gibbs\(_\infty\) as \(\gamma =\infty\)

As P-Max-Sum perfectly simulates Max-Sum, i.e., preserves the solution of its underlying non-private counterpart [18, Theorem 4.1], we now compare P-Gibbs’ quality of solution with that of Max-Sum. We begin by generating the benchmark instances.

Benchmarks. As in Sect. 6, the instances are created using pyDCOP’s generate option. We use similarly sized problems as in the experiments in Sect. 6.

  1. Ising [23]. We generate 5 sample Ising problems with variables/agents between [10, 20) and \(D_i=\{0,1\},\forall i\). The constraints are of two types: (i) binary constraints whose strength is sampled from \(\mathcal {U}[-\beta ,\beta ]\) where \(\beta \in [1,10)\) and (ii) unary constraints whose strength is sampled from \(\mathcal {U}[-\rho ,\rho ]\) where \(\rho \in [0.05,0.9)\). Ising is a minimization problem.

  2. Graph-coloring (GC). We generate 5 sample graph-coloring problems, with the number of agents/variables between [50, 100] and agents’ domain size between [10, 20). Each constraint is a random integer taken from (0, 10). Graph-coloring is a minimization problem.

We omitted meeting-scheduling from this set of experiments, as Max-Sum did not perform well on these instances.

Results. For both Ising and GC, the solution quality remains \(> 1\). That is, P-Gibbs consistently outputs better solutions than Max-Sum. The quality also improves as the privacy budget, \(\epsilon\), increases from 0.046 to 9.55.

Runtime. Here, we show that while P-Max-Sum perfectly simulates Max-Sum, it does so with a significant computational overhead. More concretely, from [18, Section 6.2], we know that P-Max-Sum’s computational overhead (compared to Max-Sum), for any iteration and each node is quadratic in the domain size. E.g., from [18, Section 6.7], for random graphs, the runtime increases from 100 s for \(|D|=3\), to 242 s for \(|D|=5\) and 450 s for \(|D|=7\).

With this, we conclude that P-Gibbs’ performance is comparable to Max-Sum (Fig. 7) and, importantly, without the significant computational overhead of P-Max-Sum.

Comparing P-Gibbs, P-RODA and P-Max-Sum. From Table 1, P-RODA [27] satisfies topology, constraint, and decision privacy. In terms of performance, P-RODA finds better quality solutions than P-Max-Sum [27]. With Fig. 7, we also see that P-Gibbs provides better quality of solution than Max-Sum (and, consequently, P-Max-Sum). Given these observations, we believe that the solutions of P-Gibbs and P-RODA may be comparable.

Furthermore, our differentially private variant is significantly less computationally expensive than P-RODA. For instance, [27, Figure 8] shows a non-linear increase in P-RODA’s runtime with an increase in the domain size. In fact, P-RODA takes (a minimum of) \(\approx 200\) seconds per iteration when the domain size is 25 [27, Figure 8]. In contrast, P-Gibbs takes \(\approx 300\) s and \(\approx 25\) s to complete 50 iterations for randomly generated instances of graph-coloring and meeting-scheduling, respectively, with the same domain size and 100 agents.

Appendix 3 Explaining P-Gibbs’ privacy protection for varying \(\epsilon\)

To measure how close an algorithm’s most probable assignments are to purely random assignments, we introduce the metric Assignment Proximity. The critical difference between Assignment Distance (AD) and Assignment Proximity (AP) is that while AD compares the distance between the overall assignment distributions, AP compares the distance between the most probable assignment and a random assignment.

1.1 Assignment Proximity

Consider the following definition.

Definition 8

(Assignment Proximity (AP\(_A\))) We define the Assignment Proximity (AP\(_A\)) of a DCOP algorithm A as the L2-distance between the vector of empirical frequencies of each variable’s most probable assignment across l runs and the vector corresponding to a uniformly random assignment. Formally,

$$\begin{aligned} AP_A = \left\| \left( \frac{\#\{k\in [l] : x_i^k = x_{f_i}\}}{l}\right) _{i\in [p]}-\frac{1}{|D|}\cdot \mathbbm {1}_p\right\| _2 \end{aligned}$$
(14)

where \(\mathbbm {1}_p\) is the p-dimensional all-ones vector, \(x_i^k\) is the final assignment of variable \(x_i\) in the \(k\in [l]\)-th run, \(\texttt {frequent}(\cdot )\) is a function which outputs the most frequently occurring value in an input vector, and \(x_{f_i} = \texttt {frequent}(x_i^1,\ldots ,x_i^l)\) is the most frequent assignment of variable \(x_i\).

The intuition behind introducing assignment proximity is that given AP\(_{\text{ SD-Gibbs }}\) and AP\(_{\text{ P-Gibbs }}\), one can compare the proximity of P-Gibbs’ assignment to a random assignment with that of SD-Gibbs. A greater value of AP\(_{\text{ SD-Gibbs }}\) will imply that with SD-Gibbs, each variable is being assigned a particular domain value with high probability, in turn encoding maximum information. In contrast, a lower value of AP\(_{\text{ P-Gibbs }}\) will imply that with P-Gibbs, each variable is being assigned a particular domain value with probability closer to random (i.e., 1/|D|). By comparing the assignment proximity values, we can explain the increased privacy of P-Gibbs.
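Equation (14) can be computed directly from the l recorded runs. A minimal sketch follows; the array layout (`runs[k][i]` holding the final assignment of variable \(x_i\) in run k) is an illustrative assumption.

```python
from collections import Counter

import numpy as np

def assignment_proximity(runs, domain_size):
    """Assignment Proximity (sketch of Eq. 14). `runs` is an l x p
    array: runs[k][i] is the final assignment of variable x_i in run k.
    For each variable, take the empirical frequency of its most
    frequent value and measure the L2-distance to the uniform
    frequency 1/|D|."""
    runs = np.asarray(runs)
    l, p = runs.shape
    top_freqs = np.array(
        [Counter(runs[:, i]).most_common(1)[0][1] / l for i in range(p)]
    )
    return float(np.linalg.norm(top_freqs - np.ones(p) / domain_size))
```

A fully deterministic algorithm (every run repeating the same assignment) maximizes this distance, while an algorithm sampling each variable uniformly over its domain drives it toward zero, matching the intuition above.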

1.2 Experimental Evaluation

To better visualize assignment proximity, we empirically derive its values for the two DCOP benchmarks, graph-coloring and meeting-scheduling.

Instance Setup. For graph coloring, we have 30 agents/variables with domain size 10 and each constraint between (0, 10]. For meeting scheduling, we have 30 agents/variables with domain size 20 and each constraint between \([-1000,10]{\setminus } \{0\}\).

Fig. 8
figure 8

Assignment Proximity (AP) for SD-Gibbs and P-Gibbs for different \(\epsilon\)s, with the corresponding Solution Quality (SQ) values. The lower the AP, the higher the proximity of an algorithm’s final assignment with a perfectly random assignment. Thus, an algorithm with lower AP encodes less information regarding an agent’s utility function, preserving constraint privacy

Results. We run both benchmark instances \(l=20\) times and report the corresponding assignment proximity (AP) values in Fig. 8. From the figure, observe that the AP value for P-Gibbs is \(\approx 50\%\) less than that for SD-Gibbs. This shows that P-Gibbs’ final assignment encodes less information than SD-Gibbs’, in turn better preserving constraint privacy. Moreover, as \(\epsilon\) increases, the decrease in added noise and the increase in the subsampling probability result in higher AP values for P-Gibbs, as the algorithm behaves more like SD-Gibbs.

About this article

Cite this article

Damle, S., Triastcyn, A., Faltings, B. et al. Differentially private multi-agent constraint optimization. Auton Agent Multi-Agent Syst 38, 8 (2024). https://doi.org/10.1007/s10458-024-09636-x
