
Set-Linearizable Implementations from Read/Write Operations: Sets, Fetch&Increment, Stacks and Queues with Multiplicity

Distributed Computing

Abstract

This work considers asynchronous shared memory systems in which any number of processes may crash. It identifies relaxations of fetch&increment, queues, sets and stacks that admit non-blocking or wait-free implementations using only Read/Write operations, without Read-After-Write synchronization patterns. Set-linearizability, a generalization of linearizability designed to specify concurrent behaviors, is used to formally express these relaxations and to precisely identify the subset of executions that preserve the original sequential behavior. The specifications allow an item to be returned more than once by different operations, but only in the case of concurrency; we call such a relaxation multiplicity. Hence, these definitions give rise to new notions and new objects in whose specifications concurrency explicitly appears. As far as we know, this work is the first to provide relaxations of objects with consensus number two that can be implemented using only Read/Write registers.
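As a toy illustration of the multiplicity relaxation described above, the following sketch (a deliberately simplified model of ours, not the paper's formal specification) runs a set-sequential execution of a queue in which all dequeues placed in the same concurrency class return the same item, which is removed only once:

```python
from collections import deque

def run_set_sequential(classes):
    """Run a set-sequential execution of a queue with multiplicity.

    `classes` is a list of concurrency classes; each class is either
    ("enq", item) or ("deq", k), the latter meaning k concurrent
    dequeues. In this simplified model, the k concurrent dequeues of
    one class all return the *same* item (the multiplicity
    relaxation), yet the item is removed from the queue only once.
    """
    q = deque()
    results = []
    for op in classes:
        if op[0] == "enq":
            q.append(op[1])
            results.append(None)
        else:  # k dequeues sharing one concurrency class
            k = op[1]
            item = q.popleft() if q else None  # None models "empty"
            results.append([item] * k)  # every dequeue returns it
    return results

# Two concurrent dequeues both return 'a'; a later dequeue gets 'b'.
print(run_set_sequential([("enq", "a"), ("enq", "b"), ("deq", 2), ("deq", 1)]))
# → [None, None, ['a', 'a'], ['b']]
```

Note that in a sequential (non-concurrent) execution every concurrency class is a singleton, so no item is ever duplicated, matching the requirement that the relaxation shows up only under concurrency.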


Notes

  1. The terms “object” and “concurrent data structure” are used as synonyms.

  2. High contention can, however, make Read/Write operations slower.

  3. Some authors use lock-free instead of non-blocking.

  4. Although the object can be Read/Write implemented without them.

  5. Even under these assumptions, implementing a (non-relaxed) set in a non-blocking or wait-free manner still requires the use of Read-Modify-Write operations.

  6. Although the proof set-linearizes only executions with no pending operations, the example considers an execution with a pending operation to make the discussion more complete.

  7. It is an open question if there are wait-free linearizable queue implementations using only objects with consensus number two (see [4, 11, 16, 32, 33]).

References

  1. Afek, Y., Attiya, H., Dolev, D., Gafni, E., Merritt, M., Shavit, N.: Atomic snapshots of shared memory. J. ACM 40(4), 873–890 (1993)


  2. Afek, Y., Gafni, E., Morrison, A.: Common2 extended to stacks and unbounded concurrency. Distrib. Comput. 20(4), 239–252 (2007)


  3. Afek, Y., Korland, G., Yanovsky, E.: Quasi-linearizability: relaxed consistency for improved concurrency. In: Proceedings of 14th International Conference on Principles of Distributed Systems, (OPODIS’10), Springer LNCS 6490, pp. 395-410 (2010)

  4. Afek, Y., Weisberger, E., Weisman, H.: A completeness theorem for a class of synchronization objects. In: Proceedings of the 12th ACM Symposium on Principles of Distributed Computing (PODC’93), ACM Press, pp. 159–170 (1993)

  5. Alistarh, D., Brown, T., Kopinsky, J., Li, J., Nadiradze, G.: Distributionally linearizable data structures. In: Proceedings of the 30th Symposium on Parallelism in Algorithms and Architectures (SPAA’18), ACM Press, pp. 133–142 (2018)

  6. Alistarh, D., Kopinsky, J., Li, J., Shavit, N.: The SprayList: a scalable relaxed priority queue. In: Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPOPP’15), ACM Press, pp. 11–20 (2015)

  7. Aspnes, J., Attiya, H., Censor-Hillel, K., Ellen, F.: Limited-use atomic snapshots with polylogarithmic step complexity. J. ACM 62(1), 3:1-3:22 (2015)


  8. Attiya, H., Guerraoui, R., Hendler, D., Kuznetsov, P.: The complexity of obstruction-free implementations. J. ACM, 56(4), Article 24, 33 pp (2009)

  9. Attiya, H., Guerraoui, R., Hendler, D., Kuznetsov, P., Michael, M.M., Vechev, M.T.: Laws of order: expensive synchronization in concurrent algorithms cannot be eliminated. In: Proceedings of the 38th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL’11), ACM Press, pp. 487-498 (2011)

  10. Attiya, H., Herlihy, M., Rachman, O.: Atomic snapshots using lattice agreement. Distrib. Comput. 8(3), 121–132 (1995)


  11. Attiya, H., Castañeda, A., Hendler, D.: Nontrivial and universal helping for wait-free queues and stacks. J. Parallel Distrib. Comput. 121, 1–14 (2018)


  12. Castañeda, A., Piña M.A.: Fully read/write fence-free work-stealing with multiplicity. In: Proceedings of the 35th International Symposium on Distributed Computing (DISC’21), LIPIcs Vol. 209, pp. 16:1–16:20 (2021)

  13. Castañeda, A., Rajsbaum, S., Raynal, M.: Unifying concurrent objects and distributed tasks: interval-linearizability. J. ACM, 65(6), Article 45, 42 pp (2018)

  14. Castañeda, A., Rajsbaum, S., Raynal, M.: Relaxed queues and stacks from read/write operations. In: Proceedings of the 24th International Conference Principles of Distributed Systems (OPODIS’20), LIPIcs Vol. 184, pp. 13:1-13:19 (2020)

  15. Castañeda, A., Rajsbaum, S., Raynal, M.: A linearizability-based hierarchy for concurrent specifications. To appear in Communications of the ACM (2022)

  16. Eisenstat, D.: Two-enqueuer queue in Common2. arXiv:0805.0444v2, 12 pp (2009)

  17. Ellen, F., Hendler, D., Shavit, N.: On the inherent sequentiality of concurrent objects. SIAM J. Comput. 41(3), 519–536 (2012)


  18. Goubault, E., Ledent, J., Mimram S.: Concurrent specifications beyond linearizability. In: Proceedings of the 22nd International Conference on Principles of Distributed Systems (OPODIS’18), LIPIcs Vol. 125, pp. 28:1-28:16 (2018)

  19. Haas, A., Henzinger, T.A., Holzer, A., Kirsch, Ch.M, Lippautz, M., Payer, H., Sezgin, A., Sokolova, A., Veith, H.: Local linearizability for concurrent set-type data structures. In: Proceedings of the 27th International Conference on Concurrency Theory (CONCUR’16), LIPIcs Vol. 59, pp. 6:1–6:15 (2016)

  20. Haas, A., Lippautz, M., Henzinger, T.A., Payer, H., Sokolova, A., Kirsch, C.M., Sezgin, A.: Distributed queues in shared memory: multicore performance and scalability through quantitative relaxation. In: Computing Frontiers Conference (CF’13), ACM Press, pp. 17:1–17:9 (2013)

  21. Hemed, N., Rinetzky, N., Vafeiadis, V.: Modular verification of concurrent-aware linearizability. In: Proceedings of the 29th International Conference on Distributed Computing (DISC’15), Springer LNCS 9363, pp. 371–387 (2015)

  22. Hendler, D., Shavit, N., Yerushalmi, L.: A scalable lock-free stack algorithm. J. Parallel Distrib. Comput. 70(1), 1–12 (2010)


  23. Henzinger, T.A., Kirsch, C.M., Payer, H., Sezgin, A., Sokolova, A.: Quantitative relaxation of concurrent data structures. In: Proceedings of the 40th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL’13), ACM Press, pp. 17:1–17:9 (2013)

  24. Herlihy, M.P.: Wait-free synchronization. ACM Trans. Program. Lang. Syst. 13(1), 124–149 (1991)


  25. Herlihy, M.P., Shavit, N.: The Art of Multiprocessor Programming. Morgan Kaufmann, 508 pp, ISBN 978-0-12-370591-4 (2008)

  26. Herlihy, M.P., Wing, J.M.: Linearizability: a correctness condition for concurrent objects. ACM Trans. Program. Lang. Syst. 12(3), 463–492 (1990)

  27. Imbs, D., Raynal, M.: Help when needed, but no more: efficient read/write partial snapshot. J. Parallel Distrib. Comput. 72(1), 1–13 (2012)


  28. Kirsch, C.M., Lippautz, M., Payer, H.: Fast and scalable lock-free FIFO queues. In: Proceedings of the 12th International Conference on Parallel Computing Technologies (PaCT’13), Springer LNCS 7979, pp. 208–223 (2013)

  29. Kirsch, C.M., Payer, H., Röck, H., Sokolova, A.: Performance, scalability, and semantics of concurrent FIFO queues. In: Proceedings of the 12th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP’12), Springer LNCS 7439, pp. 273–287 (2012)

  30. Lamport, L.: On interprocess communication, Part I: basic formalism. Distrib. Comput. 1(2), 77–85 (1986)


  31. Li, Z.: Non-blocking implementations of queues in asynchronous distributed shared-memory systems. Tech Report, Department of Computer Science, University of Toronto (2001)

  32. Matei, D.: A single-enqueuer wait-free queue implementation. In: Proceedings of the 18th International Conference on Distributed Computing (DISC’04), Springer LNCS 3274, pp. 132–143 (2004)

  33. Matei, D., Brodsky, A., Ellen, F.: Restricted stack implementations. In: Proceedings of the 19th International Conference on Distributed Computing (DISC’05), Springer LNCS 3724, pp. 137–151 (2005)

  34. Michael, M.M., Vechev, M.T., Saraswat, S.A.: Idempotent work stealing. In: Proceedings of the 14th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPOPP’09), ACM Press, pp. 45–54 (2009)

  35. Moir, M., Shavit, N.: Concurrent data structures. Handbook of Data Structures and Applications, chapter 47, Chapman and Hall/CRC Press, 33 pp (2007)

  36. Morrison, A., Afek, Y.: Fence-free work stealing on bounded TSO processors. In: Proceedings of the 19th ACM Architectural Support for Programming Languages and Operating Systems (ASPLOS’14), ACM Press, pp. 413–426 (2014)

  37. Neiger, G.: Brief announcement: Set linearizability. In: Proceedings of the 13th Annual ACM Symposium on Principles of Distributed Computing (PODC’94), ACM Press, p. 396 (1994)

  38. Nguyen, D., Lenharth, A., Pingali, K.: A lightweight infrastructure for graph analytics. In: Proceedings of the 24th ACM Symposium on Operating Systems Principles (SOSP’13), ACM Press, pp. 456–471 (2013)

  39. Payer, H., Röck, H., Kirsch, C.M., Sokolova, A.: Brief announcement: Scalability versus semantics of concurrent FIFO queues. In: Proceedings of the 30th ACM Symposium on Principles of Distributed Computing (PODC’11), ACM Press, pp. 331–332 (2011)

  40. Rajsbaum, S., Raynal, M.: Mastering concurrent computing through sequential thinking. Commun. ACM 63(1), 78–87 (2020)


  41. Raynal, M.: Concurrent Programming: Algorithms, Principles and Foundations. Springer, 515 pp, ISBN 978-3-642-32026-2 (2013)

  42. Rihani, H., Sanders, P., Dementiev, R.: Brief announcement: MultiQueues: simple relaxed concurrent priority queues. In: Proceedings of the 27th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA’15), ACM Press, pp. 80–82 (2015)

  43. Shavit, N.: Data structures in the multicore age. Commun. ACM 54(3), 76–84 (2011)


  44. Shavit, N., Taubenfeld, G.: The computability of relaxed data structures: queues and stacks as examples. Distrib. Comput. 29(5), 395–407 (2016)


  45. Talmage, E., Welch, J.L.: Relaxed data types as consistency conditions. Algorithms 11(5), 61 (2018)

  46. Taubenfeld, G.: Synchronization Algorithms and Concurrent Programming. Pearson Education/Prentice Hall, 423 pp, ISBN 0-131-97259-6 (2006)

  47. Zhou, T., Michael, M.M., Spear, M.F.: A practical, scalable, relaxed priority queue. In: Proceedings of the 48th International Conference on Parallel Processing (ICPP’19), pp. 57:1–57:10 (2019)


Acknowledgements

We are grateful to the anonymous reviewers for their valuable comments, which greatly improved an earlier version of this paper.

Corresponding author

Correspondence to Armando Castañeda.


A preliminary version of the results in this paper was presented at OPODIS 2020 [14]. Supported by UNAM-PAPIIT grants IN108723 and IN106520, and by the French projects ByBloS (ANR-20-CE25-0002-01) and PriCLeSS (ANR-10-LABX-07-81), devoted to the design of modular building blocks for distributed applications.

Missing proof steps

This section contains the proof steps missing from the proof of Theorem 3 in Sect. 6.

Any linearization \(\textsf{SetLin} (F)\) implies a set-linearization \(\textsf{SetLin} (E)\). Given any linearization \(\textsf{SetLin} (F)\) of F, a set-linearization \(\textsf{SetLin} (E)\) of E can be obtained as follows: every Pop operation removed from E is added to the concurrency class of \(\textsf{SetLin} (F)\) that contains the Pop operation returning the same item. To prove the claim, consider two operations \(\textsf{op}\) and \(\textsf{op}'\) of E such that \(\textsf{op} <_E \textsf{op}'\). We will argue that the concurrency class of \(\textsf{op}\) appears before the concurrency class of \(\textsf{op}'\) in \(\textsf{SetLin} (E)\), namely, \(\textsf{op} <_{\textsf{SetLin} (E)} \textsf{op}'\), from which it follows that \(\textsf{SetLin} (E)\) is indeed a set-linearization of E (as E and \(\textsf{SetLin} (E)\) have the same set of operations with the same responses, by construction, and \(\textsf{SetLin} (E)\) is a set-sequential execution of a stack with multiplicity, since \(\textsf{SetLin} (F)\) is a sequential execution of the stack). We have four cases:

  1. 1.

    \(\textsf{op}\) and \(\textsf{op}'\) appear in F. We have that \(\textsf{op} <_F \textsf{op}'\), and hence \(\textsf{op} <_{\textsf{SetLin} (F)} \textsf{op}'\), since \(\textsf{SetLin} (F)\) is a linearization of F. By definition of \(\textsf{SetLin} (E)\), \(\textsf{op} <_{\textsf{SetLin} (E)}~\textsf{op}'\).

  2. 2.

    \(\textsf{op}\) appears in F and \(\textsf{op}'\) does not appear in F. It must be that \(\textsf{op}'\) is a Pop operation that returns an item, say x. Let \(\textsf{op}''\) be the Pop operation of F that returns x. As already explained, in E, the invocation-response interval of \(\textsf{op}'\) contains the invocation-response interval of \(\textsf{op}''\). Then, \(\textsf{op} <_E \textsf{op}''\), which implies that \(\textsf{op} <_F \textsf{op}''\). Thus, \(\textsf{op} <_{\textsf{SetLin} (F)} \textsf{op}''\), since \(\textsf{SetLin} (F)\) is a linearization of F, and then \(\textsf{op} <_{\textsf{SetLin} (E)} \textsf{op}''\), by definition of \(\textsf{SetLin} (E)\). Since \(\textsf{op}'\) belongs to the concurrency class of \(\textsf{op}''\) in \(\textsf{SetLin} (E)\), we finally have \(\textsf{op} <_{\textsf{SetLin} (E)} \textsf{op}'\).

  3. 3.

    \(\textsf{op}\) does not appear in F and \(\textsf{op}'\) appears in F. Then, \(\textsf{op}\) is a Pop operation that returns an item, say x; let \(\textsf{op}''\) be the Pop operation of F that returns x. In E, the responses of \(\textsf{op}\) and \(\textsf{op}''\) occur at the same Take set-step in Line 06 (recall that invocations and responses of operations are identified with their first and last steps), and hence \(\textsf{op}'' <_E \textsf{op}'\), which implies that \(\textsf{op}'' <_F \textsf{op}'\). Thus, \(\textsf{op}'' <_{\textsf{SetLin} (F)} \textsf{op}'\), since \(\textsf{SetLin} (F)\) is a linearization of F, and then \(\textsf{op}'' <_{\textsf{SetLin} (E)} \textsf{op}'\), by definition of \(\textsf{SetLin} (E)\). Since \(\textsf{op}\) belongs to the concurrency class of \(\textsf{op}''\) in \(\textsf{SetLin} (E)\), we have \(\textsf{op} <_{\textsf{SetLin} (E)} \textsf{op}'\).

  4. 4.

    Neither \(\textsf{op}\) nor \(\textsf{op}'\) appears in F. In this case, \(\textsf{op}\) and \(\textsf{op}'\) are Pop operations that return distinct items, say x and y, respectively. Let \(\textsf{op}''\) and \(\textsf{op}'''\) be the Pop operations of F that return x and y, respectively. Using reasoning similar to that of the previous two cases, we can show that \(\textsf{op}'' <_{\textsf{SetLin} (E)} \textsf{op}'''\), and then \(\textsf{op} <_{\textsf{SetLin} (E)} \textsf{op}'\), as \(\textsf{op}\) and \(\textsf{op}'\) belong to the concurrency classes of \(\textsf{op}''\) and \(\textsf{op}'''\) in \(\textsf{SetLin} (E)\), respectively.

Any execution F of Set-Seq-Stack can be transformed into an execution H of Seq-Stack. To formalize the transformation, some definitions are presented, considering the state of the shared base objects at the end of F. Let \(Top\_sets\) itself denote its content at the end of F. For each \(q \ge 1\), let \(S_q\) be the set containing every item x such that \(\textsf{Push} (x)\) appears in F and the operation obtains q in its step in Line 01. We have that: (1) for every \(\textsf{Push} (x)\) in F, there is a set \(S_q\) containing x, by definition of the sets \(S_q\); (2) for every \(q \ge Top\_sets\), \(S_q\) is empty, by the specification of readable Fetch&Inc with multiplicity; (3) for every \(q < Top\_sets\), \(S_q\) is non-empty, by the specification of readable Fetch&Inc with multiplicity; (4) for every two distinct \(q, r < Top\_sets\), \(S_q\) and \(S_r\) are disjoint, since every item is pushed at most once, by assumption. For each \(x \in S_q\), let set(x) be the integer q. Also let \(|S_{\le q}|\) denote the sum \(|S_1| + |S_2| + \ldots + |S_q|\), and let \(|S_{\le 0}|\) denote 0. From now on, we consider only the sets \(S_q\) with \(1 \le q < Top\_sets\). Each such set defines a segment of \(Items[1, 2, \ldots , |S_{\le Top\_sets - 1}|]\), whose length equals the cardinality of the set. Specifically, \(S_q\) defines the segment \(Items[|S_{\le {q-1}}|+1, \ldots , |S_{\le {q}}|]\); alternatively, we say that the segment is defined by \(S_q\).

We now assign each item in \(S_q\) to an entry in the segment defined by \(S_q\). To do so, we define a strict partial order \(\prec \) over the items in \(S_q\), and then extend it to a total order from which the assignment is directly obtained. We define \(\prec \) by considering the steps \(Sets[q].\textsf{Take} \) in F that correspond to Line 06. Let e be any of these steps that returns an item, say x, and let (T, r) be the state of the set Sets[q] right before e. Then, \(x \in T\). For every \(y \in T\) distinct from x, we add the relation \(y \prec x\). We do the same for every such step e. The \(Sets[q].\textsf{Take} \) steps that return \((\emptyset , \cdot )\) do not add any relation. We have that \(\prec \) is irreflexive (as every item is pushed at most once, by assumption) and antisymmetric (as once \(y \prec x\) is set, x is removed from Sets[q] and never pushed again). Consider the transitive extension of \(\prec \), and consider any linear extension of the resulting strict partial order. By abuse of notation, let \(\prec \) denote that total order. Let us denote the items in \(S_q\) as \(x_0, x_1, \ldots , x_{|S_q|-1}\) such that \(x_0 \prec x_1 \prec \ldots \prec x_{|S_q|-1}\). Then, each \(x_i\) is assigned to \(Items[|S_{\le {q-1}}|+1+i]\); namely, \(x_0\) is assigned to \(Items[|S_{\le {q-1}}|+1]\), \(x_1\) to \(Items[|S_{\le {q-1}}|+2]\), and so on, up to \(x_{|S_q|-1}\), which is assigned to \(Items[|S_{\le {q-1}}|+1+|S_q|-1] = Items[|S_{\le {q}}|]\). For each \(x_i \in S_q\), let \(id(x_i)\) denote \(|S_{\le {q-1}}|+1+i\), i.e., the index of the entry of Items assigned to \(x_i\).
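The index arithmetic of this assignment can be sketched mechanically as follows (a hypothetical helper of ours; each \(S_q\) is assumed to be given already sorted by the total order \(\prec \)):

```python
def assign_indices(sets_by_q):
    """Map each item to its entry of Items[1 .. |S_{<= Top_sets-1}|].

    `sets_by_q` is a list [S_1, S_2, ...]; each S_q is a list of items
    already sorted by the total order of the text.  S_q occupies the
    segment Items[|S_{<=q-1}|+1 .. |S_{<=q}|], and the i-th item of
    S_q (0-based) receives the index id(x_i) = |S_{<=q-1}| + 1 + i.
    """
    ident = {}
    prefix = 0  # running value of |S_{<= q-1}|
    for s_q in sets_by_q:
        for i, x in enumerate(s_q):
            ident[x] = prefix + 1 + i
        prefix += len(s_q)
    return ident

# S_1 = {a, b} and S_2 = {c}: a -> Items[1], b -> Items[2], c -> Items[3].
print(assign_indices([["a", "b"], ["c"]]))  # → {'a': 1, 'b': 2, 'c': 3}
```

The segments are contiguous and disjoint precisely because of property (4) above: each item is pushed at most once, so it belongs to exactly one \(S_q\).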

We are now ready to explain in detail how F is transformed into H, arguing during the explanation that H is actually an execution of Seq-Stack (Fig. 3). We take any operation in F and replace its steps with steps of Seq-Stack with consistent responses.
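Seq-Stack itself appears as Fig. 3 of the paper and is not part of this excerpt. From the line references used below (a Fetch&Inc on Top in Line 01, a write to Items in Line 02, a read of Top in Line 04, and downward Swap(\(\bot \)) scans over Items in Line 06), a plausible sequential sketch is the following; this is a reconstruction under those assumptions, not the authors' figure:

```python
EMPTY = object()  # models the "empty" response of Pop

class SeqStack:
    """Sequential sketch of Seq-Stack, reconstructed from the line
    references in this appendix (Fig. 3 is not in this excerpt)."""

    def __init__(self, capacity=1024):
        self.top = 1                         # Top: Fetch&Inc object, initially 1
        self.items = [None] * (capacity + 1) # Items[1..capacity]; None models ⊥

    def push(self, x):
        i = self.top              # Line 01: Top.Fetch&Inc()
        self.top += 1
        self.items[i] = x         # Line 02: Items[i].Write(x)

    def pop(self):
        t = self.top              # Line 04: Top.Read()
        for i in range(t - 1, 0, -1):
            x = self.items[i]     # Line 06: Items[i].Swap(⊥)
            self.items[i] = None
            if x is not None:
                return x
        return EMPTY

s = SeqStack()
s.push("a"); s.push("b")
print(s.pop(), s.pop())  # → b a
```

Under this reading, the transformation below replaces each set-step of F with the corresponding sequence of these sequential steps, which is why the responses obtained in H can be checked line by line.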

  • Push operations. Let e denote any set-step \( Top\_sets.\mathsf{Fetch \& Inc} \) in F. Then, e contains steps that correspond to Line 01 and are of some concurrent Push operations of F. Observe that the set \(S_q\) as defined above contains the items pushed by these operations, where q is the value that the Fetch&Inc steps return in e (hence e contains as many Fetch&Inc steps as there are items in \(S_q\), and the state of \(Top\_sets\) right before e is q). Let \(x_0, x_1, \ldots , x_{|S_q|-1}\) denote the items in \(S_q\) such that \(id(x_0)< id(x_1)< \ldots < id(x_{|S_q|-1})\). By definition, \(id(x_i) = |S_{\le {q-1}}|+1+i\). Now, in H, e is replaced with a sequence of steps \( Top.\mathsf{Fetch \& Inc} \), each of them corresponding to a step in Line 01 of Seq-Stack (Fig. 3). Concretely, e is replaced with the following sequence of \(|S_q|\) steps:

    $$ \begin{aligned}&\langle Top.\mathsf{Fetch \& Inc} ():id(x_0) \rangle ,\\&\langle Top.\mathsf{Fetch \& Inc} ():id(x_1) \rangle ,\\&\vdots \\&\langle Top.\mathsf{Fetch \& Inc} ():id(x_{|S_q|-1}) \rangle , \end{aligned}$$

    where \( \langle Top.\mathsf{Fetch \& Inc} ():id(x_i) \rangle \) corresponds to the step of \(\textsf{Push} (x_i)\) in Line 01, in H. Observe that in H, Top’s content before the sequence that replaces e is \(id(x_0) = |S_{\le {q-1}}|+1\), and its content after the sequence is \(id(x_{|S_q|-1})+1 = |S_{\le {q}}|+1\), which is consistent with the specification of Fetch&Inc. Thus, Top follows the specification of Fetch&Inc in H. Consider any \(\textsf{Push} (x)\) in F. We have just described how its step in Line 01 is replaced in H. We now deal with its step \(Sets[set(x)].\textsf{Put} (x)\) corresponding to Line 02, which will be denoted e: the step is simply replaced with \(\langle Items[id(x)].\textsf{Write} (x): \textsf{true} \rangle \); this step belongs to \(\textsf{Push} (x)\) in H and corresponds to Line 02 of Seq-Stack (Fig. 3), and will be denoted H(e). Note that in H, the step of \(\textsf{Push} (x)\) corresponding to Line 01 obtains id(x), as defined above, and hence it is consistent with Seq-Stack that \(\textsf{Push} (x)\) writes in Items[id(x)]. Also observe that in F, the set in Sets[set(x)] contains x right after e happens, and in H, the segment of Items defined by \(S_{set(x)}\) contains x in its entry Items[id(x)] right after H(e) happens.

  • Pop operations. Consider any Pop operation of F, and let e be its \(Top\_sets.\textsf{Read} \) step corresponding to Line 04. In H, this step is replaced with \(\langle Top.\textsf{Read} (): |S_{\le q}|+1 \rangle \), where \(q+1\) is the content of \(Top\_sets\) at e; let H(e) denote that step of H, which corresponds to a step of Line 04 of Seq-Stack (Fig. 3). We argue that the response of H(e) in H is consistent. Prior to e in F, there are q set-steps that correspond to \( Top\_sets.\mathsf{Fetch \& Inc} \) in Line 01; these q set-steps are replaced in H as described above. By construction, the content of Top in H right before H(e) is \(|S_{\le {q}}|+1\), from which it follows that the response of H(e) in H is consistent. Consider now any step \(Sets[q].\textsf{Take} \) of the same Pop operation, which corresponds to a step in Line 06. Let e denote that step. Consider the set \(S_q\) as defined before, and let \(x_0, x_1, \ldots , x_{|S_q|-1}\) denote the items in \(S_q\) such that \(id(x_0)< id(x_1)< \ldots < id(x_{|S_q|-1})\). Suppose first that e returns y. By definition, \(y \in S_q\). Let \(x_i\) be such that \(y = x_i\). Then, e is replaced in H with the following sequence of steps, which will be denoted H(e):

    $$\begin{aligned}&\langle Items[id(x_{|S_q|-1})].\textsf{Swap} (\bot ):\bot \rangle ,\\&\vdots \\&\langle Items[id(x_{i+1})].\textsf{Swap} (\bot ):\bot \rangle ,\\&\langle Items[id(x_i)].\textsf{Swap} (\bot ):x_i \rangle . \end{aligned}$$

    These steps correspond to steps in Line 06 of Seq-Stack (Fig. 3), all of them belonging to the same Pop operation in H. The set Sets[q] contains \(x_i\) right before e, in F. Therefore, the response in the step \(\langle Items[id(x_i)].\textsf{Swap} (\bot ):x_i \rangle \) in H(e) is consistent; note that \(x_i\) does not appear in the segment defined by \(S_q\) after H(e) in H. We argue that the responses in the steps in H(e) before \(\langle Items[id(x_i)].\textsf{Swap} (\bot ):x_i \rangle \) are consistent too. The only way one of these responses can be inconsistent is if, in F, Sets[q] contains an item z right before e, and \(z = x_j\) with \(id(x_j) > id(x_i)\). Namely, \(\langle Items[id(x_j)].\textsf{Swap} (\bot ):\bot \rangle \) appears before \(\langle Items[id(x_i)].\textsf{Swap} (\bot ):x_i \rangle \) in H(e) (and hence the response in \(\langle Items[id(x_j)].\textsf{Swap} (\bot ):\bot \rangle \) is incorrect in H because \(Items[id(x_j)] = x_j\) right before H(e)). But this cannot happen: since \(x_j \prec x_i\), by definition of the relation \(\prec \), the index assigned to \(x_j\) is smaller than the index assigned to \(x_i\), namely, \(id(x_j) < id(x_i)\), from which it follows that \(\langle Items[id(x_j)].\textsf{Swap} (\bot ):\bot \rangle \) does not even appear in H(e). Thus, all responses in H(e), which replaces e, are consistent. To conclude the transformation, suppose that e returns \((\emptyset ,\cdot )\). Then, e is replaced in H with the sequence of steps:

    $$\begin{aligned}&\langle Items[id(x_{|S_q|-1})].\textsf{Swap} (\bot ):\bot \rangle ,\\&\vdots \\&\langle Items[id(x_1)].\textsf{Swap} (\bot ):\bot \rangle ,\\&\langle Items[id(x_0)].\textsf{Swap} (\bot ):\bot \rangle . \end{aligned}$$

    As Sets[q] is empty right before e in F, all responses in the sequence are consistent. The transformation from F to H is complete.
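The replacement sequences for Take steps used above can be generated mechanically. The following small sketch (hypothetical helper names of ours) emits, for one \(Sets[q].\textsf{Take} \) step, the list of (index, response) pairs of the Swap steps that replace it in H, given the \(\prec \)-sorted items of \(S_q\) and the id assignment:

```python
def replace_take(sorted_items, ident, returned=None):
    """Build the Swap-step sequence H(e) replacing one Sets[q].Take.

    `sorted_items` lists S_q as x_0 < x_1 < ... (so ids increase);
    `ident` maps each item to its Items index.  A Take that returned
    item y becomes Swap(⊥) steps scanning downward from the top of
    the segment until y's entry; a Take that returned (∅,·) scans
    the whole segment, every Swap answering ⊥ (modelled as None).
    """
    steps = []
    for x in reversed(sorted_items):          # top of the segment first
        hit = (x == returned)
        steps.append((ident[x], x if hit else None))  # (index, response)
        if hit:
            break                             # y found; stop scanning
    return steps

ident = {"a": 1, "b": 2, "c": 3}
# Take returned "b": swap Items[3] (answers ⊥), then Items[2] (answers "b").
print(replace_take(["a", "b", "c"], ident, returned="b"))
# → [(3, None), (2, 'b')]
```

The downward scan mirrors the argument above: any entry with a larger id than the returned item must already hold \(\bot \), so every emitted response is consistent.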


Cite this article

Castañeda, A., Rajsbaum, S. & Raynal, M. Set-Linearizable Implementations from Read/Write Operations: Sets, Fetch&Increment, Stacks and Queues with Multiplicity. Distrib. Comput. 36, 89–106 (2023). https://doi.org/10.1007/s00446-022-00440-y
