
Communication complexity of byzantine agreement, revisited

Published in: Distributed Computing

Abstract

As Byzantine Agreement (BA) protocols find application in large-scale decentralized cryptocurrencies, an increasingly important problem is to design BA protocols with improved communication complexity. A few existing works have shown how to achieve subquadratic BA under an adaptive adversary. Intriguingly, they all make a common relaxation about the adaptivity of the attacker, that is, if an honest node sends a message and then gets corrupted in some round, the adversary cannot erase the message that was already sent—henceforth we say that such an adversary cannot perform “after-the-fact removal”. By contrast, many (super-)quadratic BA protocols in the literature can tolerate after-the-fact removal. In this paper, we first prove that disallowing after-the-fact removal is necessary for achieving subquadratic-communication BA. Next, we show new subquadratic binary BA constructions (of course, assuming no after-the-fact removal) that achieve near-optimal resilience and expected constant rounds under standard cryptographic assumptions and a public-key infrastructure (PKI) in both synchronous and partially synchronous settings. In comparison, all known subquadratic protocols make additional strong assumptions such as random oracles or the ability of honest nodes to erase secrets from memory, and even with these strong assumptions, no prior work can achieve the above properties. Lastly, we show that some setup assumption is necessary for achieving subquadratic multicast-based BA.

Fig. 1

Notes

  1. If a node receives \({\lambda }/2\) \(\mathtt{Commit} \) messages but is not eligible to send a \(\mathtt{Terminate} \) message, it terminates without sending one.

  2. Here, we write the random coins \(\rho \) consumed by the algorithm \(\mathsf{P}_0\) explicitly, although in other places, without risk of ambiguity, we avoid writing the randomness explicitly consumed by randomized algorithms.

  3. Note that too many honest mining successes will not break the security properties of the protocol, and thus we do not consider this as a bad event.

References

  1. Abraham, I., Devadas, S., Dolev, D., Nayak, K., Ren, L.: Synchronous byzantine agreement with expected \(O(1)\) rounds, expected \(O(n^2)\) communication, and optimal resilience. In: Financial Crypto (2019)

  2. Attiya, H., Welch, J.: Distributed Computing: Fundamentals, Simulations, and Advanced Topics. Wiley, New York (2004)


  3. Ben-Or, M.: Another advantage of free choice (extended abstract): completely asynchronous agreement protocols. In: PODC (1983)

  4. Bhangale, A., Liu-Zhang, C.D., Loss, J., Nayak, K.: Efficient adaptively-secure byzantine agreement for long messages. Cryptology ePrint Archive, Report 2021/1403 (2021). https://ia.cr/2021/1403

  5. Bitansky, N.: Verifiable random functions from non-interactive witness-indistinguishable proofs. In: Theory of Cryptography, pp. 567–594 (2017)

  6. Blum, E., Katz, J., Liu-Zhang, C., Loss, J.: Asynchronous byzantine agreement with subquadratic communication. In: R. Pass, K. Pietrzak (eds.) Theory of Cryptography—18th International Conference, TCC 2020, Durham, NC, USA, November 16-19, 2020, Proceedings, Part I, Lecture Notes in Computer Science, vol. 12550, pp. 353–380. Springer (2020). https://doi.org/10.1007/978-3-030-64375-1_13

  7. Cachin, C., Kursawe, K., Petzold, F., Shoup, V.: Secure and efficient asynchronous broadcast protocols. In: Annual International Cryptology Conference, pp. 524–541. Springer (2001)

  8. Canetti, R.: Universally composable security: a new paradigm for cryptographic protocols. In: FOCS (2001)

  9. Castro, M., Liskov, B.: Practical byzantine fault tolerance. In: OSDI (1999)

  10. Chandran, N., Chongchitmate, W., Garay, J.A., Goldwasser, S., Ostrovsky, R., Zikas, V.: The hidden graph model: communication locality and optimal resiliency with adaptive faults. In: ITCS (2015)

  11. Chen, J., Gorbunov, S., Micali, S., Vlachos, G.: Algorand agreement: super fast and partition resilient byzantine agreement. Cryptology ePrint Archive, Report 2018/377 (2018). https://ia.cr/2018/377

  12. Chen, J., Micali, S.: Algorand: the efficient and democratic ledger. arXiv:1607.01341 (2016)

  13. Cohen, R., Coretti, S., Garay, J., Zikas, V.: Probabilistic termination and composability of cryptographic protocols. In: The 36th Annual International Cryptology Conference on Advances in Cryptology—CRYPTO 2016, pp. 240–269. Springer (2016)

  14. Cohen, S., Keidar, I., Spiegelman, A.: Brief announcement: Not a coincidence: Sub-quadratic asynchronous byzantine agreement whp. In: Proceedings of the 39th Symposium on Principles of Distributed Computing, PODC’20, pp. 175–177. Association for Computing Machinery, New York, NY, USA (2020). https://doi.org/10.1145/3382734.3405708

  15. David, B.M., Gazi, P., Kiayias, A., Russell, A.: Ouroboros praos: an adaptively-secure, semi-synchronous proof-of-stake blockchain. In: Eurocrypt (2018)

  16. Dolev, D., Reischuk, R.: Bounds on information exchange for byzantine agreement. J. ACM 32(1), 191–204 (1985)


  17. Dolev, D., Strong, H.R.: Authenticated algorithms for byzantine agreement. SIAM J. Comput. 12(4), 656–666 (1983)


  18. Dryja, T., Liu, Q.C., Narula, N.: A lower bound for byzantine agreement and consensus for adaptive adversaries using VDFs. CoRR. arXiv:2004.01939 (2020)

  19. Dwork, C., Lynch, N., Stockmeyer, L.: Consensus in the presence of partial synchrony. J. ACM 35, 288–323 (1988)


  20. Feldman, P., Micali, S.: Optimal algorithms for byzantine agreement. In: Proceedings of the twentieth annual ACM symposium on Theory of computing, pp. 148–161. ACM (1988)

  21. Fischer, M.J., Lynch, N.A., Merritt, M.: Easy impossibility proofs for distributed consensus problems. In: PODC (1985)

  22. Fitzi, M.: Generalized communication and security models in byzantine agreement. Ph.D. thesis, ETH Zurich (2002)

  23. Garay, J.A., Katz, J., Kumaresan, R., Zhou, H.S.: Adaptively secure broadcast, revisited. In: Proceedings of the 30th Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, PODC’11, pp. 179–186. ACM, New York, NY, USA (2011)

  24. Garay, J.A., Kiayias, A., Leonardos, N.: The bitcoin backbone protocol: analysis and applications. In: Eurocrypt (2015)

  25. Goyal, R., Hohenberger, S., Koppula, V., Waters, B.: A generic approach to constructing and proving verifiable random functions. In: TCC, vol. 10678, pp. 537–566. Springer (2017)

  26. Groth, J., Ostrovsky, R., Sahai, A.: New techniques for noninteractive zero-knowledge. J. ACM 59(3), 11:1-11:35 (2012). https://doi.org/10.1145/2220357.2220358


  27. Hirt, M., Zikas, V.: Adaptively secure broadcast. In: EUROCRYPT, vol. 6110, pp. 466–485. Springer (2010)

  28. Katz, J., Koo, C.Y.: On expected constant-round protocols for byzantine agreement. J. Comput. Syst. Sci. 75(2), 91–112 (2009)


  29. King, V., Saia, J.: Breaking the \(O(N^2)\) bit barrier: scalable byzantine agreement with an adaptive adversary. J. ACM 58(4), 18:1-18:24 (2011)


  30. King, V., Saia, J., Sanwalani, V., Vee, E.: Scalable leader election. In: SODA (2006)

  31. Lamport, L.: The weak byzantine generals problem. J. ACM 30(3), 668–676 (1983)


  32. Lamport, L., Shostak, R., Pease, M.: The byzantine generals problem. ACM Trans. Program. Lang. Syst. 4(3), 382–401 (1982)


  33. Micali, S., Vadhan, S., Rabin, M.: Verifiable random functions. In: FOCS (1999)

  34. Nakamoto, S.: Bitcoin: A peer-to-peer electronic cash system (2008)

  35. Rabin, M.O.: Randomized byzantine generals. In: Proceedings of the 24th Annual Symposium on Foundations of Computer Science, pp. 403–409. IEEE (1983)

  36. Tsimos, G., Loss, J., Papamanthou, C.: Gossiping for communication-efficient broadcast. Cryptology ePrint Archive, Report 2020/894 (2020). https://ia.cr/2020/894

Author information

Correspondence to Kartik Nayak.

Additional information


This work is partially supported by The Federmann Cyber Security Center in conjunction with the Israel National Cyber Directorate. T.-H. Hubert Chan was partially supported by the Hong Kong RGC under the Grants 17200418 and 17201220.

Supplementary Information


Supplementary file 1 (pptx 37 KB)

Appendices

9. Additional details on modeling

9.1 Protocol execution

In the main body, we omitted some details of the execution model for ease of understanding. We now explain these details that will be relevant to the formal proofs.

An external party called the environment and denoted \({{\mathcal {Z}}} \) provides inputs to honest nodes and receives outputs from them. As mentioned, the adversary, denoted \({\mathcal {A}} \), can adaptively corrupt nodes at any time during the execution. All nodes that have been corrupted are under the control of \({\mathcal {A}} \), i.e., the messages they receive are forwarded to \({\mathcal {A}} \), and \({\mathcal {A}} \) controls what messages they send once they become corrupt. The adversary \({\mathcal {A}} \) and the environment \({{\mathcal {Z}}} \) are allowed to freely exchange messages at any time during the execution. Henceforth, we assume that all players as well as \({\mathcal {A}} \) and \({{\mathcal {Z}}} \) are non-uniform, probabilistic polynomial-time (\(\text {p.p.t.}\)) Interactive Turing Machines, and the execution is parametrized by a security parameter \(\kappa \) that is common knowledge to all players as well as \({\mathcal {A}} \) and \({{\mathcal {Z}}} \). Since the protocol execution is probabilistic in nature, we would like to ensure that certain security properties such as consistency and liveness hold for almost all execution traces.

Notational conventions In our subsequent proofs we sometimes use the notation \(\textsf {view}\) to denote a randomly sampled execution. The randomness in the execution comes from honest nodes’ randomness, \(\mathcal {A}\), and \({\mathcal {Z}}\), and \(\textsf {view}\) is sometimes also referred to as an execution trace or a sample path. We would like that the fraction of sample paths that fail to satisfy relevant security properties be negligibly small in the security parameter \(\kappa \).

9.2 Ideal mining functionality \(\mathcal {F}_{\mathrm{mine}} \)

Earlier we used \(\mathcal {F}_{\mathrm{mine}}\) to describe our ideal-world protocols. For precision, we now spell out the details of \(\mathcal {F}_{\mathrm{mine}}\).

Fig. 2: The mining ideal functionality \(\mathcal {F}_{\mathrm{mine}}\)

\(\mathcal {F}_{\mathrm{mine}} \) ideal functionality As shown in Fig. 2, the \(\mathcal {F}_{\mathrm{mine}}\) ideal functionality has two activation points:

  • Whenever a node i calls \(\mathtt{mine}({\mathsf{m}})\) for the first time, \(\mathcal {F}_{\mathrm{mine}}\) flips a random coin with appropriate probability to decide if node i is eligible to send \({\mathsf{m}} \).

  • If node i has called \(\mathtt{mine}({\mathsf{m}})\) and the attempt is successful, anyone can then call \(\mathtt{verify}({\mathsf{m}}, i)\) to ascertain that indeed i is eligible to send \({\mathsf{m}} \).

Recall that in our scheme, different types of messages are associated with different probabilities, and we assume that this is hard-wired in \(\mathcal {F}_{\mathrm{mine}} \) via the mapping \({\mathfrak {p}}\) that maps each message type to an appropriate probability (see Fig. 2). This \(\mathcal {F}_{\mathrm{mine}}\) functionality is secret in the sense that if a so-far-honest node i has not attempted to determine whether it is eligible to send \({\mathsf{m}} \), then no corrupt node can learn whether i is in the committee corresponding to \({\mathsf{m}} \).
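For concreteness, the two activation points can be sketched as a small stateful object in Python. This is purely illustrative (the class and method names are ours), with the per-type probability mapping \({\mathfrak {p}}\) passed in as `prob_map`:

```python
import secrets

class FMine:
    """Illustrative sketch of the F_mine ideal functionality.

    prob_map plays the role of the hard-wired mapping from each
    message type to its mining probability."""

    def __init__(self, prob_map):
        self.prob_map = prob_map   # message type -> success probability
        self.outcomes = {}         # (node, message) -> bool

    def mine(self, node, msg, msg_type):
        # The first call for (node, msg) flips a fresh biased coin;
        # repeated calls return the same recorded outcome.
        key = (node, msg)
        if key not in self.outcomes:
            p = self.prob_map[msg_type]
            # coin with success probability ~p, using 30 bits of randomness
            self.outcomes[key] = secrets.randbelow(1 << 30) < int(p * (1 << 30))
        return self.outcomes[key]

    def verify(self, node, msg):
        # Anyone may check a completed successful attempt. For a so-far-honest
        # node that never called mine, no entry exists, so nothing leaks.
        return self.outcomes.get((node, msg), False)
```

Note that secrecy is modeled by `verify` returning `False` for any attempt that has not been made, so eligibility of unattempted honest nodes stays hidden.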

10. Instantiating \(\mathcal {F}_{\mathrm{mine}}\) in the real world

So far, all our protocols have assumed the existence of an \(\mathcal {F}_{\mathrm{mine}}\) ideal functionality. In this section, we describe how to instantiate the protocols in the real world (where \(\mathcal {F}_{\mathrm{mine}}\) does not exist) using cryptography. Technically we do not directly realize the ideal functionality \(\mathcal {F}_{\mathrm{mine}}\) in the sense of Canetti [8]—instead, we describe a real-world protocol that preserves all the security properties of the \(\mathcal {F}_{\mathrm{mine}}\)-hybrid protocols.

10.1 Preliminary: adaptively secure non-interactive zero-knowledge proofs

We use the same definitions for NIZK as in Groth et al. [26]. In particular, since we need adaptive security, the underlying NIZK must be adaptively secure as well. In the definitions below, the notion of “non-erasure computational zero-knowledge” captures the adaptive nature of the adversary, and the notion “perfect knowledge extraction” implies soundness.

We use \(f({\kappa }) \approx g({\kappa })\) to mean that there exists a negligible function \(\nu ({\kappa })\) such that \(|f({\kappa }) - g({\kappa })| < \nu ({\kappa })\).

A non-interactive proof system henceforth denoted \(\mathsf{nizk} \) for an NP language \({{\mathcal {L}}} \) consists of the following algorithms.

  • \({\mathsf{crs}} \leftarrow \mathsf{Gen}(1^{\kappa })\): Takes in a security parameter \({\kappa }\), a description of the language \({{\mathcal {L}}} \), and generates a common reference string \({\mathsf{crs}} \).

  • \(\pi \leftarrow \mathsf{P}({\mathsf{crs}}, {\mathsf{stmt}}, w)\): Takes in \({\mathsf{crs}} \), a statement \({\mathsf{stmt}} \), a witness w such that \(({\mathsf{stmt}}, w) \in {{\mathcal {L}}} \), and produces a proof \(\pi \).

  • \( b \leftarrow \mathsf{V}({\mathsf{crs}}, {\mathsf{stmt}}, \pi )\): Takes in a \({\mathsf{crs}} \), a statement \({\mathsf{stmt}} \), and a proof \(\pi \), and outputs 0 (reject) or 1 (accept).

Perfect completeness A non-interactive proof system is said to be perfectly complete, if an honest prover with a valid witness can always convince an honest verifier. More formally, for any \(({\mathsf{stmt}}, w) \in {{\mathcal {L}}} \), we have that

$$\begin{aligned}&\Pr \left[ {\mathsf{crs}} \leftarrow \mathsf{Gen}(1^{\kappa }),\right. \\&\quad \left. \pi \leftarrow \mathsf{P}({\mathsf{crs}}, {\mathsf{stmt}}, w): \mathsf{V}({\mathsf{crs}}, {\mathsf{stmt}}, \pi ) = 1 \right] = 1 \end{aligned}$$

Non-erasure computational zero-knowledge Non-erasure zero-knowledge requires that under a simulated CRS, there is a simulated prover that can produce proofs without needing the witness. Further, upon obtaining a valid witness to a statement a-posteriori, the simulated prover can explain the simulated NIZK with the correct witness.

We say that a proof system \(({\mathsf{Gen}}, \mathsf{P}, \mathsf{V})\) satisfies non-erasure computational zero-knowledge iff there exist probabilistic polynomial time algorithms \(({\mathsf{Gen}} _0, \mathsf{P}_0, \mathsf{Explain})\) such that

$$\begin{aligned}&\Pr \left[ {\mathsf{crs}} \leftarrow {\mathsf{Gen}} (1^{\kappa }), {\mathcal {A}} ^{\mathsf{Real}({\mathsf{crs}}, \cdot , \cdot )}({\mathsf{crs}}) = 1\right] \\&\quad \approx \Pr \left[ ({\mathsf{crs}} _0, {\tau } _0) \leftarrow {\mathsf{Gen}} _0(1^{\kappa }), {\mathcal {A}} ^{\mathsf{Ideal}({\mathsf{crs}} _0, {\tau } _0, \cdot , \cdot )}({\mathsf{crs}} _0) = 1\right] , \end{aligned}$$

where \(\mathsf{Real}({\mathsf{crs}}, {\mathsf{stmt}}, w)\) runs the honest prover \(\mathsf{P}({\mathsf{crs}}, {\mathsf{stmt}}, w)\) with randomness r to obtain the proof \(\pi \), and then outputs \((\pi , r)\); \(\mathsf{Ideal}({\mathsf{crs}} _0, {\tau } _0, {\mathsf{stmt}}, w)\) runs the simulated prover \(\pi \leftarrow \mathsf{P}_0({\mathsf{crs}} _0, {\tau } _0, {\mathsf{stmt}}, \rho )\) with randomness \(\rho \) (see footnote 2) and without a witness, then runs \(r' \leftarrow \mathsf{Explain}({\mathsf{crs}} _0, {\tau } _0, {\mathsf{stmt}}, w, \rho )\) and outputs \((\pi , r')\).

In the above, \({\mathsf{Gen}} _0\) is often called a simulated setup algorithm. The term \({\tau } _0\) is often called a trapdoor, which is needed to generate simulated proofs without the witness, and \({\mathsf{crs}} _0\) is often referred to as the simulated CRS.

Perfect knowledge extraction We say that a proof system \(({\mathsf{Gen}}, \mathsf{P}, \mathsf{V})\) satisfies perfect knowledge extraction, if there exist probabilistic polynomial-time algorithms \(({\mathsf{Gen}} _1, \mathsf{Extr})\), such that for every (even unbounded) adversary \({\mathcal {A}} \),

$$\begin{aligned}&\Pr \left[ {\mathsf{crs}} \leftarrow {\mathsf{Gen}} (1^{\kappa }): {\mathcal {A}} ({\mathsf{crs}}) = 1\right] \\&\quad = \Pr \left[ ({\mathsf{crs}} _1, {\tau } _1) \leftarrow {\mathsf{Gen}} _1(1^{\kappa }): {\mathcal {A}} ({\mathsf{crs}} _1) = 1\right] , \end{aligned}$$

and moreover,

$$\begin{aligned}&\Pr \left[ ({\mathsf{crs}} _1, {\tau } _1) \leftarrow {\mathsf{Gen}} _1(1^{\kappa }); ({\mathsf{stmt}}, \pi ) \leftarrow {\mathcal {A}} ({\mathsf{crs}} _1);\right. \\&\quad \left. w \leftarrow \mathsf{Extr}({\mathsf{crs}} _1, {\tau } _1, {\mathsf{stmt}}, \pi ): \begin{array}{l} \mathsf{V}({\mathsf{crs}} _1, {\mathsf{stmt}}, \pi ) = 1\\ \mathrm{but} \ ({\mathsf{stmt}}, w) \notin \mathcal {L} \end{array} \right] \\&\quad = 0 \end{aligned}$$

We often call \({\mathsf{Gen}} _1\) a simulated setup, which generates a simulated common reference string denoted \({\mathsf{crs}} _1\) and an extraction trapdoor \({\tau } _1\). The extraction trapdoor is needed to extract a witness from a proof submitted by the adversary. The notion of perfect knowledge extraction implies standard soundness; we define it this way because we will need the slightly stronger knowledge-extraction notion.

10.2 Adaptively secure non-interactive commitment scheme

An adaptively secure non-interactive commitment scheme consists of the following algorithms:

  • \({\mathsf{crs}} \leftarrow \mathsf{Gen}(1^{\kappa })\): Takes in a security parameter \({\kappa }\), and generates a common reference string \({\mathsf{crs}} \).

  • \(C \leftarrow {\mathsf{com}} ({\mathsf{crs}}, v, \rho )\): Takes in \({\mathsf{crs}} \), a value v, and a random string \(\rho \), and outputs a committed value C.

  • \(b \leftarrow \mathsf{ver}({\mathsf{crs}}, C, v, \rho )\): Takes in a \({\mathsf{crs}} \), a commitment C, a purported opening \((v, \rho )\), and outputs 0 (reject) or 1 (accept).

Computationally hiding under selective opening We say that a commitment scheme \(({\mathsf{Gen}}, {\mathsf{com}}, \mathsf{ver})\) is computationally hiding under selective opening, iff there exist probabilistic polynomial time algorithms \(({\mathsf{Gen}} _0, {{\mathsf{com}}}_0, \mathsf{Explain})\) such that

$$\begin{aligned}&\Pr \left[ {\mathsf{crs}} \leftarrow {\mathsf{Gen}} (1^{\kappa }), {\mathcal {A}} ^{\mathsf{Real}({\mathsf{crs}}, \cdot )}({\mathsf{crs}}) = 1\right] \\&\quad \approx \Pr \left[ ({\mathsf{crs}} _0, {\tau } _0) \leftarrow {\mathsf{Gen}} _0(1^{\kappa }), {\mathcal {A}} ^{\mathsf{Ideal}({\mathsf{crs}} _0, {\tau } _0, \cdot )}({\mathsf{crs}} _0) = 1\right] \end{aligned}$$

where \(\mathsf{Real}({\mathsf{crs}}, v)\) runs the honest algorithm \({\mathsf{com}} ({\mathsf{crs}}, v, r)\) with randomness r to obtain the commitment C, and then outputs \((C, r)\); \(\mathsf{Ideal}({\mathsf{crs}} _0, {\tau } _0, v)\) runs the simulated algorithm \(C \leftarrow {\mathsf{com}} _0({\mathsf{crs}} _0, {\tau } _0, \rho )\) with randomness \(\rho \) and without v, then runs \(r' \leftarrow \mathsf{Explain}({\mathsf{crs}} _0, {\tau } _0, v, \rho )\) and outputs \((C, r')\).

Perfectly binding A commitment scheme is said to be perfectly binding iff for every \({\mathsf{crs}} \) in the support of the honest CRS generation algorithm, there does not exist \((v, \rho ) \ne (v', \rho ')\) such that \({\mathsf{com}} ({\mathsf{crs}}, v, \rho ) = {\mathsf{com}} ({\mathsf{crs}}, v', \rho ')\).
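As a concrete toy illustration of these two properties, an ElGamal-style commitment is perfectly binding and, over a suitably large group, computationally hiding under DDH. The sketch below is our own example (not the homomorphic proof commitment of Groth et al. [26] used in the paper) and deliberately uses an insecurely small group so that perfect binding can be checked by exhaustion:

```python
# Toy ElGamal-style commitment over the order-11 subgroup of Z_23^*.
# Perfectly binding: for a fixed crs, (v, rho) -> C is injective.
# Computationally hiding under DDH in a real, large group; with these
# toy parameters it is of course not hiding in practice.

def gen():
    p, q, g = 23, 11, 4        # g generates the order-q subgroup of Z_p^*
    h = pow(g, 8, p)           # h = g^x for some x unknown to committers
    return (p, q, g, h)

def com(crs, v, rho):
    p, q, g, h = crs
    # C = (g^rho, h^rho * g^v); g^rho pins down rho, then g^v pins down v
    return (pow(g, rho, p), (pow(h, rho, p) * pow(g, v, p)) % p)

def ver(crs, C, v, rho):
    # accept iff (v, rho) is a correct opening of C
    return C == com(crs, v, rho)
```

Since g has prime order q, the first coordinate determines \(\rho \bmod q\) and then the second determines \(v \bmod q\), so no commitment has two distinct openings.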

The existence of such a NIZK scheme was shown by Groth et al. [26] via a building block that they called homomorphic proof commitment scheme. This building block can also be used to achieve a commitment scheme with the desired properties.

Theorem 16

(Instantiation of our NIZK and commitment schemes [26]) Assume standard bilinear group assumptions. Then, there exists a proof system that satisfies perfect completeness, non-erasure computational zero-knowledge, and perfect knowledge extraction. Further, there exists a commitment scheme that is perfectly binding and computationally hiding under selective opening.

10.3 NP language used in our construction

In our construction, we will use the following NP language \({{\mathcal {L}}} \). A pair \(({\mathsf{stmt}}, w) \in {{\mathcal {L}}} \) iff

  • parse \({\mathsf{stmt}}:= (\rho , c, {\mathsf{crs}} _{\mathrm{comm}}, {\mathsf{m}})\), parse \(w := (\textsf {sk}, s)\);

  • it must hold that \(c = {\mathsf{com}} ({\mathsf{crs}} _{\mathrm{comm}}, \textsf {sk}, s)\), and \(\mathsf{PRF} _{\textsf {sk}}({\mathsf{m}}) = \rho \).

10.4 Compiler from \(\mathcal {F}_{\mathrm{mine}}\)-hybrid protocols to real-world protocols

Our real-world protocol will remove the \(\mathcal {F}_{\mathrm{mine}}\) oracle by leveraging cryptographic building blocks including a pseudorandom function family, a non-interactive zero-knowledge proof system that satisfies computational zero-knowledge and computational soundness, and a perfectly binding and computationally hiding commitment scheme.

Earlier, we described the intuition behind our approach; hence, in this section we directly provide a formal description of how to compile our \(\mathcal {F}_{\mathrm{mine}}\)-hybrid protocols into real-world protocols using cryptography. This compilation works for our previous \(\mathcal {F}_{\mathrm{mine}}\)-hybrid protocol described in Sect. 5. The high-level idea is to realize an adaptively secure VRF from adaptively secure PRFs and NIZKs:

  • Trusted PKI setup Upfront, a trusted party runs the CRS generation algorithms of the commitment and the NIZK scheme to obtain \({\mathsf{crs}} _{\mathrm{comm}}\) and \({\mathsf{crs}} _{\mathrm{nizk}}\). It then chooses a secret PRF key for every node, where the i-th node has key \(\textsf {sk}_i\). It publishes \(({\mathsf{crs}} _{\mathrm{comm}}, {\mathsf{crs}} _{\mathrm{nizk}})\) as the public parameters, and each node i’s public key denoted \(\textsf {pk}_i\) is computed as a commitment of \(\textsf {sk}_i\) using a random string \(s_i\). The collection of all users’ public keys is published to form the PKI, i.e., the mapping from each node i to its public key \(\textsf {pk}_i\) is public information. Further, each node i is given the secret key \((\textsf {sk}_i, s_i)\).

  • Instantiating \(\mathcal {F}_{\mathrm{mine}}.\mathtt{mine}\). Recall that in the ideal-world protocol a node i calls \(\mathcal {F}_{\mathrm{mine}}.\mathtt{mine}({\mathsf{m}})\) to mine a vote for a message \({\mathsf{m}} \). Now, instead, the node i calls \(\rho := \mathsf{PRF}_{\textsf {sk}_i}({\mathsf{m}})\), and computes the NIZK proof

    $$\begin{aligned} \pi := \mathsf{nizk}.\mathsf{P}((\rho , \textsf {pk}_i, {\mathsf{crs}} _{\mathrm{comm}}, {\mathsf{m}}), (\textsf {sk}_i, s_i)) \end{aligned}$$

    where \(s_i\) is the randomness used in committing \(\textsf {sk}_i\) during the trusted setup. Intuitively, this zero-knowledge proof proves that the evaluation outcome \(\rho \) is correct w.r.t. the node’s public key (which is a commitment of its secret key).

    The mining attempt for \({\mathsf{m}} \) is considered successful if \(\rho < D\), where D is an appropriate difficulty parameter (either \(D_0\) or \(D_1\) defined in Sect. 4.2) that controls the probability that the message \({\mathsf{m}} \) is successfully mined.

  • New message format. Recall that earlier in our \(\mathcal {F}_{\mathrm{mine}}\)-hybrid protocols, every message multicast by a so-far-honest node i must be of one of the following forms:

    • Mined messages of the form \(({\mathsf{m}}, i)\) where node i has successfully called \(\mathcal {F}_{\mathrm{mine}}.\mathtt{mine}({\mathsf{m}})\); For example, in the synchronous honest majority protocol (Sect. 5), \({\mathsf{m}} \) can be of the form \((\mathtt{T}, r, b)\) where \(\mathtt{T} \in \{\mathtt{Propose}, \mathtt{Vote}, \mathtt{Commit}, \mathtt{Status}\}\), r denotes an epoch number, and \(b \in \{0, 1, \bot \}\); or of the form \((\mathtt{Terminate}, b)\).

    • Compound messages, i.e., a concatenation of the above types of messages.

    For every mined message \(({\mathsf{m}}, i)\) that is either stand-alone or contained in a compound message, in the real-world protocol, we rewrite \(({\mathsf{m}}, i)\) as \(({\mathsf{m}}, i, \rho , \pi )\) where the terms \(\rho \) and \(\pi \) are defined in the most natural manner:

    • If \(({\mathsf{m}}, i)\) is part of a message that a so-far-honest node i wants to multicast, then the terms \(\rho \) and \(\pi \) are those generated by i in place of calling \(\mathcal {F}_{\mathrm{mine}}.\mathtt{mine}({\mathsf{m}})\) in the real world (as explained above);

    • Else, if \(({\mathsf{m}}, i)\) is part of a message that a so-far-honest node \(j \ne i\) wants to multicast, it must be that j has received a valid real-world tuple \(({\mathsf{m}}, i, \rho , \pi )\) where validity will be defined shortly, and thus \(\rho \) and \(\pi \) are simply the terms contained in this tuple.

  • Instantiating \(\mathcal {F}_{\mathrm{mine}}.\mathtt{verify}\). In the ideal world, a node would call \(\mathcal {F}_{\mathrm{mine}}.\mathtt{verify}\) to check the validity of mined messages upon receiving them (possibly contained in compound messages). In the real-world protocol, we perform the following instead: upon receiving the mined message \(({\mathsf{m}}, i, \rho , \pi )\) that is possibly contained in compound messages, a node can verify the message’s validity by checking:

    1. \(\rho < D\) where D is an appropriate difficulty parameter that depends on the type of the mined message (either \(D_0\) or \(D_1\)); and

    2. \(\pi \) is indeed a valid NIZK proof for the statement formed by the tuple \((\rho , \textsf {pk}_i, {\mathsf{crs}} _{\mathrm{comm}}, {\mathsf{m}})\).

    The tuple is discarded unless both checks pass.
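The mining and verification steps above can be sketched as follows, with HMAC-SHA256 standing in for the PRF and the NIZK proof \(\pi \) abstracted away; the function names and the encoding of a probability p as a 256-bit difficulty D are our own assumptions:

```python
import hashlib
import hmac

HASH_BITS = 256  # output length of the stand-in PRF

def difficulty(p):
    # D such that Pr[rho < D] ~= p for a uniform 256-bit rho
    return int(p * (1 << HASH_BITS))

def prf(sk, msg):
    # stand-in for PRF_sk(m); returns rho as a 256-bit integer
    digest = hmac.new(sk, msg, hashlib.sha256).digest()
    return int.from_bytes(digest, "big")

def mine(sk_i, msg, p):
    """Node i's replacement for F_mine.mine(m): evaluate the PRF and
    test rho < D. A full implementation would also attach the NIZK
    proof pi that rho is correct w.r.t. node i's public key."""
    rho = prf(sk_i, msg)
    return (rho, rho < difficulty(p))

def check(rho, p):
    # Receiver-side check 1 (rho < D); check 2 (NIZK verification) is omitted.
    return rho < difficulty(p)
```

Because the PRF is deterministic, re-mining the same message yields the same \(\rho \), matching \(\mathcal {F}_{\mathrm{mine}}\)'s behavior of flipping the coin only on the first call.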

10.5 Main theorems for real-world protocols

After applying the above compiler to our \(\mathcal {F}_{\mathrm{mine}}\)-hybrid protocols described in Sects. 5 and 6, we obtain our real-world protocols. In this section, we present our main theorem statements for these settings. The proofs for these theorems can be derived by combining the proofs in Sects. 5 and 6 as well as those in the following section, i.e., Appendix 11, where we show that the relevant security properties are preserved in the real world as long as the cryptographic building blocks are secure.

In the theorem statements below, when we say “assume that the cryptographic building blocks employed are secure”, we formally mean that 1) the pseudorandom function family employed is secure; 2) the non-interactive zero-knowledge proof system satisfies non-erasure computational zero-knowledge and perfect knowledge extraction; 3) the commitment scheme is computationally hiding under selective opening and perfectly binding; and 4) the signature scheme is secure (if relevant).

Theorem 17

(Sub-quadratic BA under Synchrony) Let \(\pi _{\mathrm{sync}}\) be the protocol obtained by applying the above compiler to the protocol in Sect. 5, and assume that the cryptographic building blocks employed are secure. Then, for any arbitrarily small positive constant \(\epsilon \) and any \(n \in {\mathbb {N}}\), \(\pi _{\mathrm{sync}}\) satisfies consistency and validity, and tolerates \(f<(\frac{1}{2}-\epsilon )n\) adaptive corruptions, except with \(\mathsf{negl}(\kappa )\) probability. Further, \(\pi _{\mathrm{sync}}\) achieves expected constant rounds and \(\chi \cdot \mathsf{poly} \log (\kappa )\) multicast complexity. In the above, \(\chi \) is a security parameter related to the hardness of the cryptographic building blocks; \(\chi =\mathsf{poly} (\kappa )\) under standard cryptographic assumptions and \(\chi =\mathsf{poly} \log (\kappa )\) if we assume sub-exponential security of the cryptographic primitives employed.

Theorem 18

(Sub-quadratic BA under Partial Synchrony) Let \(\pi _{\mathrm{partialsync}}\) be the protocol obtained by applying the above compiler to the protocol in Sect. 6, and assume that the cryptographic building blocks employed are secure. Then, for any arbitrarily small positive constant \(\epsilon \), any \(n \in {\mathbb {N}}\), \(\pi _{\mathrm{partialsync}}\) satisfies consistency and validity, and tolerates \(f<(\frac{1}{3}-\epsilon )n\) adaptive corruptions, except for \(\mathsf{negl}(\kappa )\) probability. Further, \(\pi _{\mathrm{partialsync}}\) achieves expected \(\varDelta \cdot \mathsf{poly} \log (\kappa )\) rounds and \(\chi \cdot \mathsf{poly} \log (\kappa ) \cdot \log \varDelta \) multicast complexity. In the above, \(\chi \) is a security parameter related to the hardness of the cryptographic building blocks; \(\chi =\mathsf{poly} (\kappa )\) under standard cryptographic assumptions and \(\chi =\mathsf{poly} \log (\kappa )\) if we assume sub-exponential security of the cryptographic primitives employed.

Proof

Proofs for the above two theorems can be obtained by combining the \(\mathcal {F}_{\mathrm{mine}}\)-hybrid analysis in Sect. 5 or 6 with Appendix 11, where we show that the relevant security properties are preserved by the real-world protocol. \(\square \)

11. Real world is as secure as the \(\mathcal {F}_{\mathrm{mine}}\)-hybrid world

In this section, we prove that our real-world protocol is just as secure as the \(\mathcal {F}_{\mathrm{mine}}\)-hybrid protocol. To do this, we do not cryptographically realize \(\mathcal {F}_{\mathrm{mine}}\). Instead, our proof in essence defines a sequence of hybrids that gradually modifies the \(\mathcal {F}_{\mathrm{mine}}\)-hybrid protocol into the real-world protocol, and we prove that each step of the modification does not decrease the security of the protocol (except by negligible amounts). We stress that this is different from the more typical proof that argues that the adversary’s views in adjacent hybrids are computationally indistinguishable. One tricky aspect of the proof stems from the fact that we consider adaptive corruption. Whenever the real-world adversary adaptively corrupts someone, the player’s PRF key is revealed to the adversary. For this reason, our proof requires a PRF that resists selective-opening attacks, i.e., honest players’ PRFs should still be secure even when the adversary can adaptively corrupt some players and learn their secret keys. In Sect. 11.1, we define this selective-opening notion of PRF security and prove that it follows from standard PRF security. The most technical part of the proof is described in Sect. 11.3, where we gradually replace the PRF outcomes with random values as in the \(\mathcal {F}_{\mathrm{mine}}\)-hybrid world. To achieve this, we describe a hybrid sequence that essentially goes back in time, where the k-th hybrid step replaces the honest users’ PRF outcomes after the k-th corruption query with random coins. To make this hybrid sequence go through, we define some polynomial-time checkable bad events in Sect. 11.2.

11.1 Preliminary: PRF's security under selective opening

Our proof will directly rely on the security of a PRF under selective opening attacks. We will prove that any secure PRF family is secure under selective opening with a polynomial loss in the security.

Pseudorandomness under selective opening We consider a selective-opening adversary that interacts with a challenger. The adversary can request to create new PRF instances, query existing instances on specified messages, and selectively corrupt instances to obtain their secret keys. We would like to claim that for instances that have not been corrupted, the adversary is unable to distinguish the PRFs' evaluation outcomes on any future message from random values drawn from an appropriate domain. More formally, we consider the following game between a challenger \({\mathcal {C}} \) and an adversary \({\mathcal {A}} \).

\(\mathsf{Expt}^{{\mathcal {A}}}_b(1^\kappa )\):

  • \({\mathcal {A}} (1^\kappa )\) can adaptively interact with \({\mathcal {C}} \) through the following queries:

    • Create instance. The challenger \({\mathcal {C}} \) creates a new PRF instance by calling the honest \(\mathsf{Gen}(1^\kappa )\). Henceforth, the instance will be assigned an index that corresponds to the number of “create instance” queries made so far. The i-th instance’s secret key will be denoted \(\textsf {sk}_i\).

    • Evaluate. The adversary \({\mathcal {A}} \) specifies an index i that corresponds to an instance already created and a message \({\mathsf{m}} \), and the challenger computes \(r \leftarrow \mathsf{PRF}_{\textsf {sk}_i}({\mathsf{m}})\) and returns r to \({\mathcal {A}} \).

    • Corrupt. The adversary \({\mathcal {A}} \) specifies an index i, and the challenger \({\mathcal {C}} \) returns \(\textsf {sk}_i\) to \({\mathcal {A}} \) (if the i-th instance has been created).

    • Challenge. The adversary \({\mathcal {A}} \) specifies an index \(i^*\) that must have been created and a message \({\mathsf{m}} \). If \(b = 0\), the challenger returns a completely random string of appropriate length. If \(b = 1\), the challenger computes \(r \leftarrow \mathsf{PRF}_{\textsf {sk}_{i^*}}({\mathsf{m}})\) and returns r to the adversary.

We say that \({\mathcal {A}} \) is compliant iff with probability 1, every challenge tuple \((i^*, {\mathsf{m}})\) it submits satisfies the following: 1) \({\mathcal {A}} \) does not make a corruption query on \(i^*\) throughout the game; and 2) \({\mathcal {A}} \) does not make any evaluation query on the tuple \((i^*, {\mathsf{m}})\).
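The experiment above can be sketched concretely. The following is a minimal, illustrative Python model of \(\mathsf{Expt}^{{\mathcal {A}}}_b(1^\kappa )\), in which HMAC-SHA256 stands in for the PRF and the class and method names are our own choices, not part of the paper's formal model; the compliance conditions are enforced with assertions.

```python
# Minimal sketch of the selective-opening PRF experiment Expt_b.
# HMAC-SHA256 stands in for the PRF family; names are illustrative.
import hmac, hashlib, os

class SelectiveOpeningChallenger:
    def __init__(self, b):
        self.b = b              # challenge bit: 0 = random, 1 = real PRF
        self.keys = []          # sk_i for instance i
        self.corrupted = set()  # indices handed to the adversary
        self.evaluated = set()  # (i, m) pairs queried via Evaluate

    def create_instance(self):
        self.keys.append(os.urandom(32))  # Gen(1^kappa)
        return len(self.keys) - 1

    def _prf(self, i, m):
        return hmac.new(self.keys[i], m, hashlib.sha256).digest()

    def evaluate(self, i, m):
        self.evaluated.add((i, m))
        return self._prf(i, m)

    def corrupt(self, i):
        self.corrupted.add(i)
        return self.keys[i]     # adversary learns sk_i

    def challenge(self, i_star, m):
        # Compliance: i* never corrupted, (i*, m) never evaluated.
        assert i_star not in self.corrupted
        assert (i_star, m) not in self.evaluated
        if self.b == 0:
            return os.urandom(32)      # random string of matching length
        return self._prf(i_star, m)    # real PRF output
```

Pseudorandomness under selective opening then asserts that no compliant efficient adversary can tell the `b = 0` challenger from the `b = 1` challenger.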

Definition 1

(Selective opening security of a PRF family) We say that a PRF scheme satisfies pseudorandomness under selective opening iff for any compliant \(\text {p.p.t.}\) adversary \({\mathcal {A}} \), its views in \(\mathsf{Expt}^{{\mathcal {A}}}_0(1^\kappa )\) and \(\mathsf{Expt}^{{\mathcal {A}}}_1(1^\kappa )\) are computationally indistinguishable.

Theorem 19

Any secure PRF family satisfies pseudorandomness under selective opening by Definition 1 (with polynomial loss in the security reduction).

Proof

Single-selective-challenge selective opening security In the single-selective challenge version of the game, the adversary commits to a challenge identifier \(i^*\) upfront during the security game, such that later, challenge queries can only be made for the committed index \(i^*\).

First, we can show that any secure PRF family would satisfy single-selective-challenge selective opening security. Suppose that there is an efficient adversary \({\mathcal {A}} \) that can break the single-selective-challenge selective opening security game for some PRF family. We construct a reduction \({\mathcal {R}} \) that leverages \({\mathcal {A}} \) to break the PRF’s security. The reduction \({\mathcal {R}} \) interacts with a PRF challenger as well as \({\mathcal {A}} \). \({\mathcal {R}} \) generates PRF keys for all instances other than \(i^*\) and answers non-\(i^*\) evaluation and corruption queries honestly. For \(i^*\), \({\mathcal {A}} \)’s evaluation requests are forwarded to the PRF challenger.

We consider the following three hybrids:

  1. The PRF challenger has a real, randomly sampled PRF function from the corresponding family, and \({\mathcal {R}} \) answers \({\mathcal {A}} \)'s challenge queries on \(i^*\) with random answers;

  2. The PRF challenger has a random function, and \({\mathcal {R}} \) answers \({\mathcal {A}} \)'s challenge queries on \(i^*\) by forwarding the PRF challenger's answers (or equivalently, by replying with random answers); and

  3. The PRF challenger has a real, randomly sampled PRF function from the corresponding family, and \({\mathcal {R}} \) answers \({\mathcal {A}} \)'s challenge queries on \(i^*\) by forwarding the PRF challenger's answers.

It is not difficult to see that \({\mathcal {A}} \)'s view in hybrid 1 is identical to its view in the single-selective-challenge selective opening security game when \(b = 0\), and that its view in hybrid 3 is identical to its view in that game when \(b = 1\). By the security of the PRF, any adjacent pair of hybrids is computationally indistinguishable.

Single-challenge selective opening security In the single-challenge selective opening version of the game, the adversary can only make challenge queries for a single \(i^*\) but it need not commit to \(i^*\) upfront at the beginning of the security game.

We now argue that any PRF that satisfies single-selective-challenge selective opening security must satisfy single-challenge selective opening security with a polynomial security loss. The proof is straightforward. Suppose that there is an efficient adversary \({\mathcal {A}} \) that breaks the single-challenge selective opening security of some PRF family; we construct an efficient reduction \({\mathcal {R}} \) that breaks the single-selective-challenge selective opening security of the same family. Basically, the reduction \({\mathcal {R}} \) guesses at random, upfront, which index \(i^*\) the adversary \({\mathcal {A}} \) will choose for its challenge queries. \({\mathcal {R}} \) then forwards all of \({\mathcal {A}} \)'s queries to the challenger of the single-selective-challenge selective opening security game. If the guess later turns out to be wrong, the reduction simply aborts and outputs a random guess \(b'\); otherwise, it outputs the same bit as \({\mathcal {A}} \). Suppose that \({\mathcal {A}} \) creates q PRF instances; then \({\mathcal {R}} \) guesses correctly with probability at least 1/q. Thus, whatever advantage \({\mathcal {A}} \) has in breaking the single-challenge selective opening security, \({\mathcal {R}} \) retains a 1/q fraction of that advantage in breaking the single-selective-challenge selective opening security of the PRF family.
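The 1/q loss from the guessing step can be illustrated numerically. The sketch below is our own toy model, not part of the proof: the reduction's advantage on a wrong guess is zero (it outputs a coin flip), so its overall advantage is a 1/q fraction of the adversary's, and a small Monte Carlo check confirms that a uniform guess hits a uniformly chosen \(i^*\) with frequency about 1/q.

```python
# Illustrative check of the guessing step: R guesses i* uniformly
# among q instances, aborting with a random bit on a wrong guess.
import random

def reduction_advantage(adv_advantage, q):
    # Wrong guess (prob 1 - 1/q): output a random bit, advantage 0.
    # Right guess (prob 1/q): inherit A's full advantage.
    return adv_advantage / q

def simulate_guess_rate(q, trials=100_000, seed=1):
    # Frequency with which a uniform guess matches a uniform i*.
    rng = random.Random(seed)
    hits = sum(rng.randrange(q) == rng.randrange(q) for _ in range(trials))
    return hits / trials
```

For example, with q = 4 instances the empirical hit rate is close to 0.25, matching the 1/q factor in the reduction.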

Selective opening security Finally, we show that any PRF family that satisfies single-challenge selective opening security must also satisfy selective opening security (i.e., Definition 1) with a polynomial security loss. This proof can be completed through a standard hybrid argument in which we replace the challenge queries from real to random one index at a time (where replacement is performed for all queries of the i-th new index that appeared in some challenge query). \(\square \)

11.2 Definition of polynomial-time checkable stochastic bad events

In all of our \(\mathcal {F}_{\mathrm{mine}}\)-hybrid protocols earlier, some stochastic bad events related to \(\mathcal {F}_{\mathrm{mine}}\) 's random coins can lead to a breach of protocol security (i.e., consistency, validity, or termination). Imprecisely speaking, these stochastic bad events take one of two forms: either there are too few honest mining successes, or there are too many corrupt mining successes. More formally, for the honest majority protocol, the stochastic bad events are stated in Lemmas 2, 3, 4, and 6.

For these stochastic bad events, there is a polynomial-time predicate henceforth denoted F, that takes in 1) all honest and corrupt mining attempts and the rounds in which the attempts are made (for a fixed \(\textsf {view}\)) and 2) \(\mathcal {F}_{\mathrm{mine}}\) ’s coins as a result of these mining attempts, and outputs 0 or 1, indicating whether the bad events are true for this specific \(\textsf {view}\). Recall that \(\textsf {view}\) denotes an execution trace.

In our earlier \(\mathcal {F}_{\mathrm{mine}}\)-world analyses (in Sect. 5), although we did not point this out explicitly, our proofs actually show that the stochastic bad events defined by F happen with small probability even when \({\mathcal {A}} \) and \({\mathcal {Z}}\) are computationally unbounded; this is because the \(\mathcal {F}_{\mathrm{mine}}\)-world protocol does not use any cryptography such as a VRF, but abstracts the cryptography with an ideal functionality \(\mathcal {F}_{\mathrm{mine}}\).

The majority of this section focuses on bounding the second category of failures (the first being signature failures), i.e., the stochastic bad events defined by the polynomial-time predicate F (where F may be a different predicate for each protocol).

For simplicity, we shall call our \(\mathcal {F}_{\mathrm{mine}}\)-hybrid protocol \(\varPi _{\mathrm{ideal}} \)—for the three different protocols, \(\varPi _{\mathrm{ideal}} \) is a different protocol; nonetheless, the same proofs hold for all three protocols.

Below we will start from the \(\mathcal {F}_{\mathrm{mine}}\) hybrid protocol and through a sequence of hybrids, arrive at the real-world protocol that uses actual cryptography. The same proof works for both the synchronous protocol and partially synchronous protocol.

11.3 Hybrid 1

Hybrid 1 is defined just like our earlier \(\mathcal {F}_{\mathrm{mine}}\)-hybrid protocol but with the following modifications:

  • \(\mathcal {F}_{\mathrm{mine}}\) chooses random PRF keys for all nodes at the very beginning, and let \(\textsf {sk}_i\) denote the PRF key chosen for the i-th node.

  • Whenever a node i makes a \(\mathtt{mine}({\mathsf{m}})\) query, \(\mathcal {F}_{\mathrm{mine}}\) determines the outcome of the coin flip as follows: compute \(\rho \leftarrow \mathsf{PRF}_{\textsf {sk}_i}({\mathsf{m}})\) and use \(\rho < D\) as the coin.

  • Whenever \({\mathcal {A}} \) adaptively corrupts a node i, \(\mathcal {F}_{\mathrm{mine}}\) discloses \(\textsf {sk}_i\) to \({\mathcal {A}} \).
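The modified \(\mathcal {F}_{\mathrm{mine}}\) of Hybrid 1 can be sketched as follows. This is an illustrative model only: HMAC-SHA256 stands in for the PRF, \(\rho \) is read as a 256-bit integer, and the difficulty parameter D is an assumed input; the class and method names are our own.

```python
# Minimal sketch of F_mine in Hybrid 1: per-node PRF keys sampled
# upfront, mine(m) flips node i's coin as PRF_sk_i(m) < D, and
# corruption discloses sk_i. HMAC-SHA256 stands in for the PRF.
import hmac, hashlib, os

class FMineHybrid1:
    def __init__(self, n, difficulty):
        # Random PRF keys for all n nodes, chosen at the very beginning.
        self.keys = [os.urandom(32) for _ in range(n)]
        self.difficulty = difficulty  # D

    def mine(self, i, m):
        # rho <- PRF_sk_i(m), interpreted as a 256-bit integer;
        # the coin is the predicate rho < D.
        rho = int.from_bytes(
            hmac.new(self.keys[i], m, hashlib.sha256).digest(), "big")
        return rho < self.difficulty

    def corrupt(self, i):
        # On adaptive corruption of node i, disclose sk_i.
        return self.keys[i]
```

Note that mining is deterministic per (node, message) pair, which is exactly why repeated mining attempts on the same message yield the same coin.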

Lemma 8

For any \(\text {p.p.t.}\) \({({\mathcal {A}}, \mathcal {Z})} \), there exists a negligible function \(\mathsf{negl}(\cdot )\) such that for any \(\kappa \), the bad events defined by F do not happen in Hybrid 1 with probability \(1 - \mathsf{negl}(\kappa )\).

Proof

Let f be the number of adaptive corruptions made by \({\mathcal {A}} \). To prove this lemma, we go through a sequence of inner hybrids over the number of adaptive corruptions made by the adversary \({\mathcal {A}} \).

Hybrid 1.f. Hybrid 1.f is defined almost identically to Hybrid 1, except for the following modification: suppose that \({\mathcal {A}} \) makes the last corruption query in round t and for node i. Whenever the ideal functionality \(\mathcal {F}_{\mathrm{mine}}\) in Hybrid 1 would have called \(\mathsf{PRF}_{\textsf {sk}_j}({\mathsf{m}})\) for any j that is honest forever and in some round \(t' \ge t\), in Hybrid 1.f we replace this call's outcome with a random string.

Claim

Suppose that the PRF scheme satisfies pseudorandomness under selective opening. Then, if for any \(\text {p.p.t.}\) \({({\mathcal {A}}, \mathcal {Z})} \) and any \(\kappa \), the bad events defined by F do not happen in Hybrid 1.f with probability at least \(\mu (\kappa )\), then for any \(\text {p.p.t.}\) \(({\mathcal {A}}, {{\mathcal {Z}}})\) and \(\kappa \), the bad events defined by F do not happen in Hybrid 1 with probability at least \(\mu (\kappa ) - \mathsf{negl}(\kappa )\).

Proof

Suppose for the sake of contradiction that the claim does not hold. We can then construct a PRF adversary \({\mathcal {A}} '\) that breaks pseudorandomness under selective opening with non-negligible probability. \({\mathcal {A}} '\) plays \(\mathcal {F}_{\mathrm{mine}} \) when interacting with \({\mathcal {A}} \). \({\mathcal {A}} '\) also interacts with a PRF challenger. In the beginning, for every node, \({\mathcal {A}} '\) asks the PRF challenger to create a PRF instance for that node. Whenever \(\mathcal {F}_{\mathrm{mine}}\) needs to evaluate a PRF, \({\mathcal {A}} '\) forwards the query to the PRF challenger. This continues until \({\mathcal {A}} \) makes the last corruption query, i.e., the f-th corruption query—suppose this last corruption query is made in round t and the node to corrupt is i. At this moment, \({\mathcal {A}} '\) discloses \(\textsf {sk}_i\) to the adversary. However, whenever Hybrid 1 would have needed to compute \(\mathsf{PRF}_{\textsf {sk}_j}({\mathsf{m}})\) for any j that is honest forever and in some round \(t' \ge t\), \({\mathcal {A}} '\) makes a challenge query to the PRF challenger for the j-th PRF instance and on the message queried. Notice that if the PRF challenger returned random answers to challenges, \({\mathcal {A}} \)'s view in this interaction would be identically distributed as in Hybrid 1.f. Otherwise, if the PRF challenger returned true PRF answers to challenges, \({\mathcal {A}} \)'s view in this interaction would be identically distributed as in Hybrid 1. \(\square \)

Hybrid \(1.f'\). Hybrid \(1.f'\) is defined almost identically to Hybrid 1.f, except for the following modification: when \({\mathcal {A}} \) makes the last corruption query—suppose that this query is to corrupt node i and happens in round t—the ideal functionality \(\mathcal {F}_{\mathrm{mine}}\) does not disclose \(\textsf {sk}_i\) to \({\mathcal {A}} \).

Claim

If for any \(\text {p.p.t.}\) \({({\mathcal {A}}, \mathcal {Z})} \) and any \(\kappa \), the bad events defined by F do not happen in Hybrid \(1.f'\) with probability at least \(\mu (\kappa )\), then for any \(\text {p.p.t.}\) \(({\mathcal {A}}, {{\mathcal {Z}}})\) and \(\kappa \), the bad events defined by F do not happen in Hybrid 1.f with probability at least \(\mu (\kappa )\).

Proof

We observe the following: once the last corruption query is made in round t for node i, for any \(t' \ge t\), any honest-forever node's coins are completely random. Thus, whether or not the adversary receives the last corrupted node's key does not help it cause the relevant bad events to occur. Specifically, at the moment the last corruption query is made—without loss of generality, assume that the adversary then makes all possible corrupt mining attempts—whether the polynomial-time checkable bad events defined by F take place is fully determined by \(\mathcal {F}_{\mathrm{mine}}\) 's random coins, and is independent of any further actions of the adversary. \(\square \)

Hybrid \(1.f''\). Hybrid \(1.f''\) is defined almost identically to Hybrid \(1.f'\), except for the following modification: suppose that the last corruption query is to corrupt node i and happens in round t; whenever the ideal functionality \(\mathcal {F}_{\mathrm{mine}}\) in Hybrid \(1.f'\) would have called \(\mathsf{PRF}_{\textsf {sk}_i}({\mathsf{m}})\) in some round \(t' \ge t\) (for the last-corrupted node i), in Hybrid \(1.f''\) we replace this call's outcome with a random string.

Claim

Suppose that the PRF scheme satisfies pseudorandomness under selective opening. Then, if for any \(\text {p.p.t.}\) \({({\mathcal {A}}, \mathcal {Z})} \) and any \(\kappa \), the bad events defined by F do not happen in Hybrid \(1.f''\) with probability at least \(\mu (\kappa )\), then for any \(\text {p.p.t.}\) \(({\mathcal {A}}, {{\mathcal {Z}}})\) and \(\kappa \), the bad events defined by F do not happen in Hybrid \(1.f'\) with probability at least \(\mu (\kappa ) - \mathsf{negl}(\kappa )\).

Proof

Suppose for the sake of contradiction that the claim does not hold. We can then construct a PRF adversary \({\mathcal {A}} '\) that breaks pseudorandomness under selective opening with non-negligible probability. \({\mathcal {A}} '\) plays \(\mathcal {F}_{\mathrm{mine}} \) when interacting with \({\mathcal {A}} \). \({\mathcal {A}} '\) also interacts with a PRF challenger. In the beginning, for every node, \({\mathcal {A}} '\) asks the PRF challenger to create a PRF instance for that node. Whenever \(\mathcal {F}_{\mathrm{mine}}\) needs to evaluate a PRF, \({\mathcal {A}} '\) forwards the query to the PRF challenger. This continues until \({\mathcal {A}} \) makes the last corruption query, i.e., the f-th corruption query—suppose this last corruption query is made in round t and the node to corrupt is i. At this moment, \({\mathcal {A}} '\) does not disclose \(\textsf {sk}_i\) to the adversary, and does not query the PRF challenger to corrupt instance i either. Furthermore, whenever Hybrid \(1.f'\) would have called \(\mathsf{PRF}_{\textsf {sk}_i}({\mathsf{m}})\) in some round \(t' \ge t\), \({\mathcal {A}} '\) now makes a challenge query to the PRF challenger for the i-th PRF instance on the specified message, and uses the answer from the PRF challenger in place of the \(\mathsf{PRF}_{\textsf {sk}_i}({\mathsf{m}})\) call. Notice that if the PRF challenger returned random answers to challenges, \({\mathcal {A}} \)'s view in this interaction would be identically distributed as in Hybrid \(1.f''\). Otherwise, if the PRF challenger returned true PRF answers to challenges, \({\mathcal {A}} \)'s view in this interaction would be identically distributed as in Hybrid \(1.f'\). \(\square \)

We can extend the same argument, continuing with the following sequence of hybrids, such that we replace more and more PRF evaluations at the end of the execution with random coins, and withhold more and more PRF secret keys from \({\mathcal {A}} \) upon adaptive corruption queries—and nonetheless the probability that the security properties are broken is affected only negligibly.

Hybrid \(1.(f-1)\). Suppose that \({\mathcal {A}} \) makes the second-to-last corruption query for node i in round t. Now, for any node j that is still honest in round t (not including node i), if \(\mathsf{PRF}_{\textsf {sk}_j}({\mathsf{m}})\) is needed by the ideal functionality in some round \(t' \ge t\), the PRF call's outcome is replaced with a random string. Otherwise, Hybrid \(1.(f-1)\) is the same as Hybrid \(1.f''\).

Claim

Suppose that the PRF scheme satisfies pseudorandomness under selective opening. Then, if for any \(\text {p.p.t.}\) \({({\mathcal {A}}, \mathcal {Z})} \) and any \(\kappa \), the bad events defined by F do not happen in Hybrid \(1.(f-1)\) with probability at least \(\mu (\kappa )\), then for any \(\text {p.p.t.}\) \(({\mathcal {A}}, {{\mathcal {Z}}})\) and \(\kappa \), the bad events defined by F do not happen in Hybrid \(1.f''\) with probability at least \(\mu (\kappa ) - \mathsf{negl}(\kappa )\).

Proof

Similar to the reduction between Hybrid 1 and Hybrid 1.f. \(\square \)

Hybrid \(1.(f-1)'\). Almost the same as Hybrid \(1.(f-1)\), but without disclosing the secret key to \({\mathcal {A}} \) upon the second-to-last corruption query.

Claim

If for any \(\text {p.p.t.}\) \({({\mathcal {A}}, \mathcal {Z})} \) and any \(\kappa \), the bad events defined by F do not happen in Hybrid \(1.(f-1)'\) with probability at least \(1 - \mu (\kappa )\), then for any \(\text {p.p.t.}\) \(({\mathcal {A}}, {{\mathcal {Z}}})\) and \(\kappa \), the bad events defined by F do not happen in Hybrid \(1.(f-1)\) with probability at least \(1 - \mu (\kappa )\).

Proof

The proof is similar to the reduction between Hybrid 1.f and Hybrid \(1.f'\), but with one more subtlety: in Hybrid \(1.(f-1)\), upon the second-to-last adaptive corruption query, made for node i in round t, for any \(t' \ge t\) and any node honest in round t (not including i but including the last node to be corrupted), all coins are random. Due to this, we observe that if there is a \(\text {p.p.t.}\) adversary \({\mathcal {A}} \) that can cause the bad events defined by F to occur with probability \(\mu \) in Hybrid \(1.(f-1)\), then there is another \(\text {p.p.t.}\) adversary \({\mathcal {A}} '\) that, upon making the second-to-last corruption query, immediately makes the last corruption query in the same round t, corrupting an arbitrary node (say, the one with the smallest index that is not yet corrupted), and \({\mathcal {A}} '\) can cause the bad events defined by F to occur with probability at least \(\mu \) in Hybrid \(1.(f-1)\).

Now, we argue that if such an \({\mathcal {A}} '\) can cause the bad events defined by F to occur in Hybrid \(1.(f-1)\) with probability \(\mu \), there must be an adversary \({\mathcal {A}} ''\) that can cause the bad events defined by F to occur in Hybrid \(1.(f-1)'\) with probability \(\mu \) too. In particular, \({\mathcal {A}} ''\) simply runs \({\mathcal {A}} '\) until \({\mathcal {A}} '\) makes the second-to-last corruption query. At this point, \({\mathcal {A}} ''\) makes an additional corruption query for an arbitrary node that is not yet corrupted. Now, clearly, whether the bad events defined by F occur is independent of any further action of the adversary—and although in Hybrid \(1.(f-1)'\), \({\mathcal {A}} ''\) does not get to see the secret key corresponding to the second-to-last query, it still has the same probability of causing the relevant bad events to occur as the adversary \({\mathcal {A}} '\) in Hybrid \(1.(f-1)\). \(\square \)

Hybrid \(1.(f-1)''\). Suppose that \({\mathcal {A}} \) makes the second-to-last corruption query for node i in round t. Now, for any node j that is still honest in round t, as well as node \(j = i\), if the ideal functionality needs to call \(\mathsf{PRF}_{\textsf {sk}_j}({\mathsf{m}})\) in some round \(t' \ge t\), the PRF call's outcome is replaced with a random string. Otherwise, Hybrid \(1.(f-1)''\) is identical to Hybrid \(1.(f-1)'\).

Due to the same argument as used in the claim in Hybrid 1.f, we may conclude that if for any \(\text {p.p.t.}\) \({({\mathcal {A}}, \mathcal {Z})} \) and any \(\kappa \), the bad events defined by F do not happen in Hybrid \(1.(f-1)''\) with probability at least \(\mu (\kappa )\), then for any \(\text {p.p.t.}\) \(({\mathcal {A}}, {{\mathcal {Z}}})\) and \(\kappa \), the bad events defined by F do not happen in Hybrid \(1.(f-1)'\) with probability at least \(\mu (\kappa ) - \mathsf{negl}(\kappa )\).

In this manner, we define a sequence of hybrids until, in the end, we reach the following hybrid:

Hybrid 1.0. All PRF evaluations in Hybrid 1 are replaced with random strings, and no secret keys are disclosed to \({\mathcal {A}} \) upon any adaptive corruption query.

It is not difficult to see that Hybrid 1.0 is identically distributed as the \(\mathcal {F}_{\mathrm{mine}}\)-hybrid protocol. We thus conclude the proof of Lemma 8.

11.4 Hybrid 2

Hybrid 2 is defined almost identically as Hybrid 1, except that now the following occurs:

  • Upfront, \(\mathcal {F}_{\mathrm{mine}}\) generates an honest CRS for the commitment scheme and the NIZK scheme and discloses the CRS to \({\mathcal {A}} \).

  • Upfront, \(\mathcal {F}_{\mathrm{mine}}\) not only chooses secret keys for all nodes, but commits to the secret keys of these nodes, and reveals the commitments to \({\mathcal {A}} \).

  • Every time \(\mathcal {F}_{\mathrm{mine}}\) receives a \(\mathtt{mine}\) query from a so-far-honest node i and for the message \({\mathsf{m}} \), it evaluates \(\rho \leftarrow \mathsf{PRF}_{\textsf {sk}_i}({\mathsf{m}})\) and computes a NIZK proof, denoted \(\pi \), to vouch for \(\rho \). Now, \(\mathcal {F}_{\mathrm{mine}}\) returns \(\rho \) and \(\pi \) to \({\mathcal {A}} \).

  • Whenever a node i becomes corrupted, \(\mathcal {F}_{\mathrm{mine}}\) reveals to \({\mathcal {A}} \) all secret randomness that node i has used in commitments and NIZKs so far, in addition to revealing its PRF secret key \(\textsf {sk}_i\).
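The commitment portion of this setup can be sketched concretely. The following is our own illustrative model, not the paper's construction: a hash-based commitment stands in for the scheme (which, in the paper, must be perfectly binding and computationally hiding under selective opening), and the NIZK layer is omitted entirely.

```python
# Sketch of the Hybrid 2 setup step: F_mine commits to every node's
# PRF key upfront and publishes the commitments; on corruption it
# reveals the key together with the commitment randomness.
# A hash-based commitment stands in for the scheme in the paper.
import hashlib, os

def commit(sk, r):
    # Commitment to sk with randomness r.
    return hashlib.sha256(sk + r).digest()

def verify_opening(c, sk, r):
    # Check that (sk, r) opens commitment c.
    return commit(sk, r) == c

class FMineHybrid2Setup:
    def __init__(self, n):
        self.keys = [os.urandom(32) for _ in range(n)]  # sk_i
        self.rand = [os.urandom(32) for _ in range(n)]  # commitment coins
        # Published to the adversary upfront:
        self.commitments = [commit(k, r)
                            for k, r in zip(self.keys, self.rand)]

    def corrupt(self, i):
        # Reveal sk_i and the commitment randomness for node i.
        return self.keys[i], self.rand[i]
```

On corruption, the adversary can check the revealed pair against the published commitment, mirroring the extra information \(\mathcal {F}_{\mathrm{mine}}\) discloses in Hybrid 2.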

Lemma 9

Suppose that the commitment scheme is computationally adaptive hiding under selective opening, and that the NIZK scheme is non-erasure computational zero-knowledge. Then, for any \(\text {p.p.t.}\) \({({\mathcal {A}}, \mathcal {Z})} \), there exists a negligible function \(\mathsf{negl}(\cdot )\) such that for any \(\kappa \), the bad events defined by F do not happen in Hybrid 2 with probability \(1 - \mathsf{negl}(\kappa )\).

Proof

The proof is standard and proceeds in the following internal hybrid steps.

  • Hybrid 2.A Hybrid 2.A is the same as Hybrid 2 but with the following modifications. \(\mathcal {F}_{\mathrm{mine}}\) calls the simulated NIZK key generation instead of the real one, and for nodes that remain honest so far, \(\mathcal {F}_{\mathrm{mine}}\) simulates their NIZK proofs without needing the nodes' PRF secret keys. Whenever an honest node i becomes corrupted, \(\mathcal {F}_{\mathrm{mine}}\) explains node i's simulated NIZKs using node i's real \(\textsf {sk}_i\) and the randomness used in its commitment, and supplies the explanations to \({\mathcal {A}} \).

Claim

Hybrid 2.A and Hybrid 2 are computationally indistinguishable from the \(\textsf {view}\) of \({{\mathcal {Z}}} \).

Proof

Straightforward due to the non-erasure computational zero-knowledge property of the NIZK. \(\square \)

  • Hybrid 2.B Hybrid 2.B is almost identical to Hybrid 2.A but with the following modifications. \(\mathcal {F}_{\mathrm{mine}}\) calls the simulated CRS generation for the commitment scheme. When generating public keys for nodes, it computes simulated commitments without using the nodes' real \(\textsf {sk}_i\)'s. When a node i becomes corrupted, it uses the real \(\textsf {sk}_i\) to compute an explanation for the earlier simulated commitment; this explanation is then supplied to the NIZK's explain algorithm to explain the NIZK too.

Claim

Hybrid 2.A and Hybrid 2.B are computationally indistinguishable from the view of the environment \({{\mathcal {Z}}} \).

Proof

Straightforward by the “computational hiding under selective opening” property of the commitment scheme. \(\square \)

Claim

If for any \(\text {p.p.t.}\) \({({\mathcal {A}}, \mathcal {Z})} \) and any \(\kappa \), the bad events defined by F do not happen in Hybrid 1 with probability at least \(\mu (\kappa )\), then for any \(\text {p.p.t.}\) \(({\mathcal {A}}, {{\mathcal {Z}}})\) and \(\kappa \), the bad events defined by F do not happen in Hybrid 2.B with probability at least \(\mu (\kappa )\).

Proof

Given an adversary \({\mathcal {A}} \) that attacks Hybrid 2.B, we can construct an adversary \({\mathcal {A}} '\) that attacks Hybrid 1. \({\mathcal {A}} '\) runs \({\mathcal {A}} \) internally. \({\mathcal {A}} '\) runs the simulated CRS generation algorithms for the commitment and NIZK schemes, and sends the simulated CRSes to \({\mathcal {A}} \). It then runs the simulated commitment scheme and sends simulated commitments to \({\mathcal {A}} \) (of a randomly chosen \(\textsf {sk}_i\) for every i). Whenever \({\mathcal {A}} \) tries to mine a message, \({\mathcal {A}} '\) intercepts this mining request and forwards it to its own \(\mathcal {F}_{\mathrm{mine}}\). If the mining attempt is successful, \({\mathcal {A}} '\) samples a random number \(\rho < D\); else it samples a random number \(\rho \ge D\). It then calls the simulated NIZK prover using \(\rho \) to simulate a NIZK proof, and sends it to \({\mathcal {A}} \). Whenever \({\mathcal {A}} \) wants to corrupt a node i, \({\mathcal {A}} '\) corrupts it with its \(\mathcal {F}_{\mathrm{mine}}\), obtains \(\textsf {sk}_i\), and then runs the \(\mathsf{Explain}\) algorithms of the commitment and NIZK schemes and discloses the explanations to \({\mathcal {A}} \). Clearly, \({\mathcal {A}} \)'s view in this interaction is identically distributed as in Hybrid 2.B. Moreover, if \({\mathcal {A}} \) succeeds in causing the bad events defined by F to happen, clearly \({\mathcal {A}} '\) will too. \(\square \)

11.5 Hybrid 3

Hybrid 3 is almost identical to Hybrid 2, except with the following modifications. Whenever an already-corrupt node makes a mining query to \(\mathcal {F}_{\mathrm{mine}}\), it must supply a \(\rho \) and a NIZK proof \(\pi \). \(\mathcal {F}_{\mathrm{mine}}\) then verifies the NIZK proof \(\pi \), and if verification passes, it uses \(\rho < D\) as the result of the coin flip.

Lemma 10

Assume that the commitment scheme is perfectly binding, and the NIZK scheme satisfies perfect knowledge extraction. Then, for any \(\text {p.p.t.}\) \(({\mathcal {A}}, {{\mathcal {Z}}})\), there exists a negligible function \(\mathsf{negl}(\cdot )\) such that for any \(\kappa \), the bad events defined by F do not happen in Hybrid 3 except with probability \(\mathsf{negl}(\kappa )\).

Proof

We can replace the NIZK's CRS generation \({\mathsf{Gen}} \) with \({\mathsf{Gen}} _1\), which generates a CRS that is identically distributed as that of the honest \({\mathsf{Gen}} \), but additionally generates an extraction trapdoor denoted \({\tau } _1\) (see the definition of perfect knowledge extraction earlier in Sect. 1). Now, upon receiving \({\mathcal {A}} \)'s NIZK proof \(\pi \), \(\mathcal {F}_{\mathrm{mine}}\) performs extraction. The lemma follows by observing that, due to the perfect knowledge extraction of the NIZK and the perfect binding property of the commitment scheme, except with negligible probability the extracted witness matches the node's PRF secret key that \(\mathcal {F}_{\mathrm{mine}} \) had chosen upfront. \(\square \)

In the lemma below, when we say "assume that the cryptographic building blocks employed are secure", we formally mean the following: the pseudorandom function family employed is secure; the non-interactive zero-knowledge proof system satisfies non-erasure computational zero-knowledge and perfect knowledge extraction; the commitment scheme is computationally hiding under selective opening and perfectly binding; and for the synchronous honest-majority protocol, additionally, the signature scheme is secure.

Lemma 11

Assume the cryptographic building blocks employed are secure. Then, for any \(\text {p.p.t.}\) \(({\mathcal {A}}, {{\mathcal {Z}}})\), there exists a negligible function \(\mathsf{negl}(\cdot )\) such that for any \(\kappa \in {\mathbb {N}}\), relevant security properties (including consistency, validity, and termination) are preserved with all but \(\mathsf{negl}(\kappa )\) probability in Hybrid 3.

Proof

As mentioned, only two types of bad events can possibly lead to a breach of the relevant security properties: 1) signature failure; and 2) the bad events defined by F. Thus, the lemma follows in a straightforward fashion by taking a union bound over the two. \(\square \)

11.6 Real-world execution

We now show that the real-world protocol is just as secure as Hybrid 3—recall that the security properties we care about include consistency, validity, and termination.

Lemma 12

If there is some \(\text {p.p.t.}\) \(({\mathcal {A}}, {{\mathcal {Z}}})\) that causes the relevant security properties to be broken in the real world with probability \(\mu \), then there is some \(\text {p.p.t.}\) \({\mathcal {A}} '\) such that \(({\mathcal {A}} ', {{\mathcal {Z}}})\) can cause the relevant security properties to be broken in Hybrid 3 with probability at least \(\mu \).

Proof

We construct the following \({\mathcal {A}} '\):

  • \({\mathcal {A}} '\) obtains CRSes for the NIZK and the commitment scheme from its \(\mathcal {F}_{\mathrm{mine}} \) and forwards them to \({\mathcal {A}} \). \({\mathcal {A}} '\) also forwards the PKI it learns from \(\mathcal {F}_{\mathrm{mine}}\) to \({\mathcal {A}} \).

  • Whenever \({\mathcal {A}} \) corrupts some node, \({\mathcal {A}} '\) does the same with its \(\mathcal {F}_{\mathrm{mine}}\), and forwards whatever learned to \({\mathcal {A}} \).

  • Whenever \({\mathcal {A}} \) sends some message to an honest node, for any portion of the message that is a “mined message” of any type, let \(({\mathsf{m}}, \rho , \pi )\) denote this mined message—we assume that \({\mathsf{m}} \) contains the purported miner of this message denoted i.

    • \({\mathcal {A}} '\) checks the validity of \(\pi \) and that \(\rho < D\) (where D is either \(D_0\) or \(D_1\) depending on the message’s type); it ignores the message if either check fails;

    • if the purported sender i is an honest node and node i has not successfully mined \({\mathsf{m}} \) with \(\mathcal {F}_{\mathrm{mine}}\), record a forgery event and simply ignore this message. Otherwise, continue with the following steps.

    • if the purported sender i is a corrupt node: \({\mathcal {A}} '\) issues a corresponding mining attempt to \(\mathcal {F}_{\mathrm{mine}} \) on behalf of i with the corresponding \(\rho \) and \(\pi \) if no such mining attempt has been made before;

    • Finally, \({\mathcal {A}} '\) forwards \({\mathsf{m}} \) to the destined honest node on behalf of the corrupt sender.

  • Whenever \({\mathcal {A}} '\) receives some message from an honest node (of Hybrid 3): for every portion of the message that is a “mined message” of any type, at this point \({\mathcal {A}} '\) must have heard from \(\mathcal {F}_{\mathrm{mine}}\) the corresponding \(\rho \) and \(\pi \) terms. \({\mathcal {A}} '\) augments the message with these terms and forwards the resulting message to \({\mathcal {A}} \).
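The checks that \({\mathcal {A}} '\) performs when relaying a purported mined message \(({\mathsf{m}}, \rho , \pi )\) can be sketched as follows. This is an illustrative simplification, not the paper's formal reduction: the names `verify_nizk`, `mined_by_honest`, `D_0`, and `D_1` are hypothetical stand-ins for the NIZK verifier, the set of messages honest nodes have successfully mined with \(\mathcal {F}_{\mathrm{mine}}\), and the two difficulty parameters.

```python
def relay_mined_message(m, rho, pi, msg_type, sender, honest_nodes,
                        mined_by_honest, verify_nizk, D_0, D_1):
    """Decide how A' handles a mined message (m, rho, pi).

    Returns 'reject'  if the proof is invalid or rho misses the difficulty,
            'forgery' if an honest sender is impersonated (message dropped),
            'forward' if the message may be delivered to the honest node.
    """
    # Difficulty depends on the message's type (D_0 or D_1 in the paper).
    D = D_0 if msg_type == 0 else D_1
    if not verify_nizk(m, rho, pi) or rho >= D:
        return "reject"
    # If the purported sender is honest but never mined m with F_mine,
    # this is a forgery event: record it and ignore the message.
    if sender in honest_nodes and (sender, m) not in mined_by_honest:
        return "forgery"
    # Otherwise forward m on behalf of the (possibly corrupt) sender.
    return "forward"
```

The forgery branch is exactly the event conditioned on in the view-by-view argument below: absent forgeries, \({\mathcal {A}} '\) relays messages faithfully between the real world and Hybrid 3.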

Note that conditioned on any \(\textsf {view}\) (determined by all randomness of the execution) with no forgery event, the relevant bad events occur either in both Hybrid 3 and the real-world execution, or in neither. For \(\textsf {view}\)s with forgery events, it is not difficult to see that if Hybrid 3 (on this \(\textsf {view}\)) does not incur the relevant bad events, then neither does the real-world execution (on this \(\textsf {view}\)). \(\square \)
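The view-by-view comparison above amounts to a pointwise domination: for every \(\textsf {view}\), a bad event in the real world implies a bad event in Hybrid 3, and hence

$$\begin{aligned} \mu = \Pr _{\textsf {view}}[\text {bad event in real world}] \le \Pr _{\textsf {view}}[\text {bad event in Hybrid 3}], \end{aligned}$$

which is exactly the claim of Lemma 12.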


Abraham, I., Chan, TH.H., Dolev, D. et al. Communication complexity of byzantine agreement, revisited. Distrib. Comput. 36, 3–28 (2023). https://doi.org/10.1007/s00446-022-00428-8
