be the (ℓ + 1)-st operation of L. Let op be invoked by process p. The following cases are possible:

– op is a read(a): the snapshot taken at op's linearization point contains all successful transfers concerning a in L. By the induction hypothesis, the resulting balance is non-negative.

– op is a failed transfer(a, b, x): the snapshot taken at the linearization point of op contains all successful transfers concerning a in L. By the induction hypothesis, the balance corresponding to this snapshot is non-negative. By the algorithm, this balance is less than x.

– op is a successful transfer(a, b, x). Let L_s, s ≤ ℓ, be the prefix of L that only contains operations linearized before the moment when q_o has taken the snapshot just before accessing kC_o. As, before accessing kC_o, q_o went through all preceding k-consensus objects associated with a and put the decided values in AS, L_s must include all outgoing transfer operations for a. Furthermore, L_s includes a subset of all incoming transfers on a. Thus, balance(a, L_s) ≤ balance(a, L_ℓ). By the algorithm, as op = transfer(a, b, x) succeeds, we have balance(a, L_s) ≥ x. Thus, balance(a, L_ℓ) ≥ x and the resulting balance in L_{ℓ+1} is non-negative.

Thus, H is linearizable.

Theorem 2 A k-shared asset-transfer object has consensus number k.

Proof It follows directly from Lemma 1 that k-shared asset-transfer has consensus number at least k. Moreover, it follows from Lemma 2 that k-shared asset-transfer has consensus number at most k. Thus, the consensus number of k-shared asset-transfer is exactly k.

R. Guerraoui et al.

It is worth noting that Theorem 2 implies
that, in a more demanding model than shared memory with crash faults (in particular, the Byzantine message passing model), solving consensus among k processes is necessary, but not necessarily sufficient, for implementing k-shared asset transfer.

# 5 Asset transfer in message passing

We established our theoretical results in a shared memory system with crash failures, proving that consensus is not necessary for implementing an asset transfer system. Moreover, a natural generalization of such a system, where up to k processes have access to atomic operations on the same account, has consensus number k. These results help us understand the level of difficulty of certain problems in the domain of cryptocurrencies. While suggesting that agreement might be unnecessary in the Byzantine message passing model (on which most blockchain systems are based) as well, they do not prove it. To show that agreement is indeed not needed for blockchain systems, we need an algorithm for the Byzantine message passing model, i.e., one where processes (some of which are potentially malicious) communicate by exchanging messages. In this section we present such an extension of our results to the message passing system with Byzantine failures.

Instead of consensus, we rely on a secure broadcast primitive that provides reliable delivery with weak (weaker than FIFO) ordering guarantees. Using secure broadcast, processes announce their transfers to the rest of the system. We establish dependencies among these transfers that induce a partial order. Using a method similar to (a weak form of) vector clocks, we make sure that each process applies the transfers respecting this dependency-induced partial order. In a nutshell, a transfer only depends on all previous transfers outgoing from the same account and on a subset of transfers incoming to that account. Each transfer operation corresponds to one invocation of secure broadcast by the
corresponding account's owner. The message being broadcast carries, in addition to the transfer itself, references to the transfer's dependencies.

As secure broadcast only provides liveness if the sender is correct, faulty processes might not be able to perform any transfers. However, due to secure broadcast's delivery properties, the correct processes will always have a consistent view of the system state.

Every transfer operation entails only a single invocation of secure broadcast, and our algorithm does not send any additional messages. Our algorithm inherits its complexity from the underlying secure broadcast implementation, and there are plenty of such algorithms optimizing complexity metrics for various settings [10,11,23,27,36,37,46]. In practice, as shown by our implementation and evaluation, our solution outperforms a consensus-based one by 5× in throughput while maintaining a sub-second latency.

The implementation can be further extended to solve the k-shared asset transfer problem. As we showed in Sect. 4, agreement among a subset of the processes is necessary in such a case. We associate each account (owned by up to k processes) with a Byzantine fault-tolerant state machine replication (BFT SMR) service executed by the owners of that account. The BFT service assigns sequence numbers to transfers, which the processes then submit to an extended version of the above-mentioned transfer protocol. As long as the replicated state machine is safe and live, we guarantee that every invoked transfer operation eventually returns. If an account becomes compromised (i.e., the safety or liveness of the BFT SMR is violated), only the corresponding account might lose liveness. In other words, outgoing transfers from the compromised account may not return, while safety and liveness of transfers from "healthy" accounts are always guaranteed. We describe this extension in more detail later (Sect. 6).

In the rest of this section, we give details
on the Byzantine message passing model, adapt our asset-transfer object accordingly (Sect. 5.1), and present its broadcast-based implementation (Sect. 5.2).

5.1 Byzantine message passing model

A process is Byzantine if it deviates from the algorithm it is assigned, either by halting prematurely, in which case we say that the process is crashed, or by performing actions that are not prescribed by its algorithm, in which case we say that the process is malicious. Malicious processes can perform arbitrary actions, except for ones that involve subverting cryptographic primitives (e.g., inverting secure hash functions). A process is called faulty if it is either crashed or malicious. A process is correct if it is not faulty and benign if it is not malicious.

The consensus number of a cryptocurrency (extended version)

Note that every correct process is benign, but not necessarily vice versa.

We only require that the transfer system behaves correctly towards benign processes, regardless of the behavior of Byzantine ones. Informally, we want to require that no benign process can be a victim of a double-spending attack, i.e., every execution appears to benign processes as a correct sequential execution, respecting the original execution's real-time ordering.

For the sake of efficiency, in our algorithm we slightly relax the last requirement, while still preventing double-spending. We require that successful transfer operations invoked by benign processes constitute a legal sequential history that preserves the real-time order. A read or a failed transfer operation invoked by a benign process p can be "outdated": it can be based on a stale state of p's balance. Informally, one can view the system requirements as linearizability for successful transfers and sequential consistency for failed transfers and reads. As progress (liveness) guarantees, we require that every operation invoked by a correct process eventually completes. In
a nutshell, sequential consistency resembles linearizability with the real-time constraints removed. Each process observes a sequence of events consistent with the sequential specification, but the effects of other processes' invocations need not respect real-time order with respect to the process's own invocations. One can argue that this relaxation incurs little impact on the system's utility, since all incoming transfers are eventually applied. We discuss a fully linearizable implementation at the end of this section.

Definition 1 Let E be any execution of an implementation and H be the corresponding history. Let ops(H) denote the set of operations in H that were executed by correct processes in E. An asset-transfer object in message passing guarantees that each invocation issued by a correct process is followed by a matching response in H, and that there exists H̄, a completion of H, such that:

(1) Let H̄ᵗ denote the sub-history of successful transfers of H̄ performed by correct processes, and let ≺ᵗ_H̄ be the subset of ≺_H̄ restricted to operations in H̄ᵗ. Then there exists a legal sequential history S such that (a) for every benign process p, H̄ᵗ|p = S|p, and (b) ≺ᵗ_H̄ ⊆ ≺_S.

(2) For every benign process p, there exists a legal sequential history S_p such that:
– ops(H̄) ⊆ ops(S_p), and
– S_p|p = H̄|p.

Notice that property (2) implies that every update in H that affects the account of a correct process p is eventually included in p's "local" history and, therefore, will be reflected in reads and transfer operations subsequently performed by p.

5.2 Asset transfer implementation in message passing

Instead of consensus, we rely on a secure broadcast primitive that is strictly weaker than consensus and
has a fully asynchronous implementation. It provides uniform reliable delivery despite Byzantine faults, and so-called source order among delivered messages. The source order property, being weaker than FIFO, guarantees that messages from the same source are delivered in the same order by all correct processes. More precisely, the secure broadcast primitive we use in our implementation has the following properties:

– Integrity: A benign process delivers a message m from a process p at most once and, if p is benign, only if p previously broadcast m.
– Agreement: If processes p and q are correct and p delivers m, then q delivers m.
– Validity: If a correct process p broadcasts m, then p delivers m.
– Source order: If p and q are benign and both deliver m from r and m′ from r, then they do so in the same order.

Operation. To perform a transfer tx, a process p securely broadcasts a message with the transfer details: the arguments of the transfer operation (see Sect. 2.2) and some metadata. The metadata includes a per-process sequence number of tx and references to the dependencies of tx. The dependencies are transfers incoming to p that must be known to any process before applying tx. These dependencies impose a causal relation between transfers that must be respected when transfers are being applied. For example, suppose that process p makes a transfer tx to process q, and q, after observing tx, performs another transfer tx′ to process r. Then q's broadcast message will contain tx′, a local sequence number, and a reference to tx. Any process (not only r) will only evaluate the validity of tx′ after having applied tx. This approach is similar
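To make the dependency mechanism above concrete, here is a minimal Python sketch; the `Transfer` record and its field names are illustrative assumptions, not the exact message format of the algorithm (which is given in Fig. 4):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transfer:
    # transfer arguments plus the broadcast metadata described above
    src: str                       # debited account (the broadcaster's own)
    dst: str                       # credited account
    amount: int
    seq: int                       # per-process sequence number
    deps: frozenset = frozenset()  # incoming transfers this one depends on

# p pays q; q, having observed that transfer, pays r:
tx = Transfer("p", "q", 5, seq=1)
tx2 = Transfer("q", "r", 3, seq=1, deps=frozenset({tx}))
# every process must apply tx before it may even evaluate tx2
assert tx in tx2.deps
```

The dependency set is what induces the causal partial order: a process delays validating `tx2` until all transfers in `tx2.deps` have been applied.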
to using vector clocks for implementing causal order among events.

To ensure the authenticity of operations, so that no process is able to debit another process's account, we assume that processes sign all their messages before broadcasting them. In practice, similar to Bitcoin and other transfer systems, every process p possesses a public-private key pair that allows only p to securely initiate transfers from its corresponding account. For simplicity of presentation, we omit this mechanism in the algorithm pseudocode.

Figure 4 describes the full algorithm implementing asset transfer in a Byzantine-prone message passing system.

> Fig. 4 Consensusless transfer system based on secure broadcast. Code for every process p

Each process p maintains, for each process q, an integer seq[q] reflecting the number of transfers which process q initiated and which process p has validated and applied. Process p also maintains, for every process q, an integer rec[q] reflecting the number of transfers process q has initiated and process p has delivered (but not necessarily applied). Additionally, there is a list hist[q] of transfers which involve process q. We say that a transfer operation involves a process q if that transfer is either outgoing or incoming on the account of q. Each process p also maintains a local variable deps. This is a set of transfers incoming for p that p has applied since the last successful outgoing transfer, and it is thus cleared after each invocation of broadcast (line 5).³ Finally, the set toValidate contains delivered transfers that are pending validation (i.e., transfers that have been delivered, but not yet validated).

To perform a transfer operation, process p first checks the balance of its own account and, if the balance is insufficient, returns false (line 3). Otherwise, process p broadcasts a message with this
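A minimal sketch of this per-process state, with assumed names mirroring the prose above (the authoritative definitions are in Fig. 4):

```python
class ProcessState:
    """Hypothetical local state of one process: seq, rec, hist, deps and
    toValidate, as described in the text. Concrete types are assumptions."""
    def __init__(self, processes):
        self.seq = {q: 0 for q in processes}       # transfers by q validated and applied
        self.rec = {q: 0 for q in processes}       # transfers by q delivered
        self.hist = {q: set() for q in processes}  # transfers involving q
        self.deps = set()        # incoming transfers since the last outgoing one
        self.toValidate = set()  # delivered but not yet validated

state = ProcessState(["p", "q", "r"])
assert state.seq["q"] == 0 and not state.deps
```

Note that `deps` is the only piece of state that is reset during normal operation (after each broadcast), which keeps the broadcast messages from growing without bound.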
operation via the secure broadcast primitive (line 4). This message includes the three basic arguments of a transfer operation as well as seq[p] + 1 and the dependencies deps. Each correct process in the system eventually delivers this message via secure broadcast (line 8). Note that, given the assumption that no process executes more than one concurrent transfer, every process waits for the delivery of its own message before initiating another broadcast. This effectively turns the source order property of secure broadcast into FIFO order. Upon delivery, process p checks the message for well-formedness (lines 9 and 10) and then adds it to the set of messages pending validation. We explain the validation procedure later. Once a transfer passes validation (the predicate in line 13 is satisfied), process p applies this transfer to its local state. Applying a transfer means that process p adds the transfer and its dependencies to the history of the outgoing account (line 15). If the transfer is incoming for the local process p, it is also added to deps, the set of current dependencies of p (line 18). If the transfer is outgoing for p, i.e., it is the currently pending transfer operation invoked by p, then the response true is returned (line 20).

To perform a read(a) operation for account a, process p simply computes the balance of this account based on the local history hist[a] (line 28).

Validation. Before applying a transfer op from some process q, process p validates op via the Valid function (lines 21–26). To be valid, op must satisfy four conditions. The first condition is that process q (the issuer of transfer op) must be the owner of the outgoing account of op (line 23). Second, any preceding transfers that process q issued must have been
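The read path can be sketched as follows; `INIT` is a hypothetical initial endowment and the `(src, dst, amount, seq)` tuple encoding of transfers is an assumption for illustration, not the exact representation used in Fig. 4:

```python
INIT = 10  # hypothetical initial balance of every account

def balance(a, hist):
    """Balance of account a computed purely from the locally applied
    history hist[a]: initial funds plus incoming minus outgoing transfers."""
    credit = sum(x for (src, dst, x, s) in hist[a] if dst == a)
    debit = sum(x for (src, dst, x, s) in hist[a] if src == a)
    return INIT + credit - debit

# p received 4 from q and sent 7 to r:
hist = {"p": {("q", "p", 4, 1), ("p", "r", 7, 1)}}
assert balance("p", hist) == 10 + 4 - 7  # = 7
```

Because this computation consults only local state, reads are purely local operations; this is precisely what later costs full linearizability (Sect. 5.3).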
validated (line 24). Third, the balance of account q must not drop below zero (line 25). Finally, the reported dependencies of op (encoded in h of line 26) must have been validated and exist in hist[q].

Lemma 3 In any infinite execution of the algorithm (Fig. 4), every operation performed by a correct process eventually completes.

> ³ The algorithm would stay correct even without ever clearing deps. Clearing it, however, avoids broadcasting ever-growing messages.

Proof A transfer operation that fails or a read operation invoked by a correct process returns immediately (lines 3 and 7, respectively). Consider a transfer operation T invoked by a correct process p that succeeds (i.e., passes the check in line 2), so that p broadcasts a message with the transfer details using secure broadcast (line 4). By the validity property of secure broadcast, p eventually delivers the message (via the secure broadcast callback, line 8) and adds it to the toValidate set. By the algorithm, this message includes a set deps of operations (called h, line 9) that involve p's account. This set includes transfers that process p delivered and validated after issuing the prior successful outgoing transfer (or since system initialization if there is no such transfer) but before issuing T (lines 4 and 5). As process p is correct, it operates on its own account, respects the sequence numbers, and issues a transfer only if it has enough balance on the account. Thus, when it is delivered by p, T must satisfy the first three conditions of the Valid predicate (lines 23–25). Moreover, by the algorithm, all dependencies (labeled h in function Valid) included in T are in the history hist[p] and thus the fourth validation condition (line 26) also
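The four validation conditions described above can be sketched as follows; the encodings (`op` as a tuple, `h` as a set, `INIT` as a hypothetical initial balance) are assumptions, and the authoritative predicate is Valid in Fig. 4:

```python
INIT = 10  # hypothetical initial balance, for illustration only

def balance(a, hist):
    credit = sum(x for (src, dst, x, s) in hist[a] if dst == a)
    debit = sum(x for (src, dst, x, s) in hist[a] if src == a)
    return INIT + credit - debit

def valid(op, h, seq, hist):
    """The four conditions of Valid (lines 23-26), with assumed encodings:
    op = (issuer, src, dst, amount, s); h = op's declared dependency set."""
    issuer, src, dst, amount, s = op
    return (issuer == src                     # 1: issuer owns the debited account
            and s == seq[src] + 1             # 2: issuer's preceding transfers validated
            and balance(src, hist) >= amount  # 3: balance does not drop below zero
            and h <= hist[src])               # 4: dependencies already in hist

seq, hist = {"q": 0}, {"q": set()}
assert valid(("q", "q", "r", 3, 1), set(), seq, hist)       # first transfer, affordable
assert not valid(("q", "q", "r", 12, 1), set(), seq, hist)  # overdraft: rejected
```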
holds. Thus, p eventually validates T and completes the operation by returning true in line 20.

Theorem 3 The algorithm in Fig. 4 implements an asset-transfer object type.

Proof Fix an execution E of the algorithm and let H be the corresponding history. Let V denote the set of all messages that were delivered (line 8) and validated (line 23) at correct processes in E. Every message m = [(q, d, y, s), h] ∈ V is put in hist[q] (line 15). We define an order ≺ ⊆ V × V as follows. For m = [(q, d, y, s), h] ∈ V and m′ = [(r, d′, y′, s′), h′] ∈ V, we have m ≺ m′ if and only if one of the following conditions holds:

– q = r and s < s′,
– (q, d, y, s) ∈ h′, or
– there exists m′′ ∈ V such that m ≺ m′′ and m′′ ≺ m′.

By the source order property of secure broadcast (see Sect. 5.2), correct processes p and r deliver messages from any process q in the same order. By the algorithm in Fig. 4, a message from q with sequence number i is added by a correct process to the toValidate set only if the previous message from q added to toValidate had sequence number i − 1 (line 10). Furthermore, a message m = [(q, d, y, s), h] is validated at a correct process only if all messages in h have been previously validated (line 26). Therefore, ≺ is acyclic and can thus be extended to a total order. Let S be the sequential history constructed from any such total order on the messages in V in which every message m = [(q, d, y, s), h] is replaced with the invocation-response
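Extending the acyclic relation ≺ to a total order is exactly a topological sort. A sketch with hypothetical message identifiers, using the standard library:

```python
from graphlib import TopologicalSorter

# Edges of an acyclic precedence relation, mapping each message to the
# set of messages that must precede it (the identifiers are made up):
prec = {
    "m2": {"m1"},  # same sender, lower sequence number
    "m3": {"m1"},  # m1 appears in m3's dependency set h
}
order = list(TopologicalSorter(prec).static_order())
assert order.index("m1") < order.index("m2")
assert order.index("m1") < order.index("m3")
```

Any such linear extension yields a sequential history S of the kind the proof constructs; the proof only needs that at least one exists, which acyclicity guarantees.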
pair transfer(q, d, y); true.

By construction, every operation transfer(q, d, y) in S is preceded by a sequence of transfers that ensure that the balance of q does not drop below y (line 25). In particular, S includes all outgoing transfers from the account of q performed previously by q itself. Additionally, S may order some incoming transfers to q that did not appear in hist[q] before the corresponding (q, d, y, s) was added to it. But these "unaccounted" operations may only increase the balance of q and, thus, it is indeed legal to return true.

By construction, for each correct process p, S respects the order of successful transfers issued by p. Thus, the subsequence of successful transfers in H "looks" linearizable to the correct processes: H, restricted to the successful transfers witnessed by the correct processes, is consistent with a legal sequential history S.

Let p be a correct process in E. Now let V_p denote the set of all messages that were delivered (line 8) and validated (line 23) at p in E. Let ≺_p be the subset of ≺ restricted to the elements of V_p. Obviously, ≺_p is cycle-free and we can again extend it to a total order. Let S_p be the sequential history built in the same way as S above. Similarly, we can see that S_p is legal and, by construction, consistent with the local history of all operations of p (including reads and failed transfers). By Lemma 3, every operation invoked by a correct process eventually completes. Thus, E indeed satisfies the properties of an asset-transfer object type.

5.3 Linearizability

As stated above, our implementation provides linearizability for successful transfers and sequential consistency for reads and failed transfers. The fundamental reason for this is
that the balance operation only consults local state, without synchronizing with other processes.

What prevents the implementation from being fully linearizable is the following situation. A process p returns from a successful transfer operation o after having delivered its own broadcast message m. Then some other process q invokes an operation o′ that queries p's balance before q delivers m. Thus, even though q invokes o′ after o returned, the result of q's operation does not reflect the effect of o, violating linearizability.

Intuitively, to achieve linearizability, the above algorithm needs to be extended with an acknowledgment mechanism that delays returning from an operation until the state observed or produced by the operation is guaranteed to be visible to other processes. In a practical setting, this has the disadvantage of adding extra communication delays.

# 6 k-shared asset transfer in message passing

Our message-passing asset-transfer implementation can be naturally extended to the k-shared case, where some accounts are owned by up to k processes. Functionally, k-shared accounts are similar to "multi-sig" accounts used in Bitcoin and other blockchain systems, where a transfer must be signed by 1 out of k account owners. The concept of multi-sig accounts, however, only extends the application-level definition of what constitutes a valid transfer and has no implications on its ordering with respect to other transfers. This section provides an informal insight into how k-shared asset transfer could be implemented in a Byzantine message passing model.

k-shared BFT SMR service. As we showed in Sect. 4, a purely asynchronous implementation of k-shared asset transfer does not exist, even in the benign shared-memory environment. To circumvent this impossibility, we assume that every account is associated with a Byzantine fault-tolerant state-machine replication (SMR) service (e.g., PBFT) that is used by the account's owners to order
their outgoing transfers. In a nutshell, the function of an SMR service is to receive inputs from clients, establish a total order on those inputs, and compute functions of the resulting sequence that it returns to clients. In particular, the account owners submit their issued transfers to the service, which assigns monotonically increasing sequence numbers to those transfers. The service itself can be implemented by the owners themselves, acting both as clients, submitting requests, and as replicas, reaching agreement on the order in which the requests must be served. As long as more than two thirds of the owners are correct, the service is safe; in particular, no sequence number is assigned to more than one transfer. Moreover, under the condition that the owners can eventually communicate within a bounded message delay, every request submitted by a correct owner is guaranteed to eventually be assigned a sequence number. One can argue that it is much more likely that this assumption of eventual synchrony holds for a bounded set of owners than for the whole set of system participants. Furthermore, the communication complexity of such an implementation is polynomial in k and not in N, the number of processes.

Account order in secure broadcast. Consider now the case where the threshold of one third of Byzantine owners is exceeded, so that the account may become blocked or, even worse, compromised. In this case, different owners may be able to issue two different transfers associated with the same sequence number. This issue can be mitigated by a slight modification of the classical secure broadcast algorithm. In addition to the Integrity, Validity, and Agreement properties of secure broadcast, the modified algorithm can implement the property of account order, generalizing the source order property (Sect. 5.2). Assume that
each broadcast message is equipped with a sequence number (generated by the BFT service, as we will see below).

– Account order: If a benign process p delivers messages m (with sequence number s) and m′ (with sequence number s′) such that m and m′ are associated with the same account and s < s′, then p delivers m before m′.

Informally, the implementation works as follows. The sender sends the message (containing the account reference and the sequence number) it wants to broadcast to all processes and waits until it receives acknowledgements from a quorum of more than two thirds of the processes. A message with sequence number s associated with an account a is only acknowledged by a benign process if the last message associated with a that it delivered had sequence number s − 1. Once a quorum is collected, the sender sends the message, equipped with the signed quorum, to all processes and delivers the message. This way, the benign processes deliver the messages associated with the same account in the same order. If the owners of an account send conflicting messages for the same sequence number, the account may block. However, and most importantly, even a compromised account is always prevented from double spending. Liveness of operations on a compromised account is not guaranteed, but the safety and liveness of other operations remain unaffected.

Putting it all together. The resulting k-shared asset transfer system is a composition of a collection of BFT services (one per account), the modified secure broadcast protocol (providing the account-order property), and a slightly modified version of the protocol in Fig. 4. To issue a transfer operation t on an account a it owns, a process p first submits t to the associated BFT service to get a sequence number. Assuming that the account is not compromised and
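The acknowledgment rule and the quorum threshold described above can be sketched as follows; the function names are assumptions, and real implementations would also verify the signatures on the acknowledgments:

```python
def should_ack(msg_seq, last_delivered_seq):
    """A benign process acknowledges a message with sequence number s for
    an account only if the last message it delivered for that account had
    sequence number s - 1."""
    return msg_seq == last_delivered_seq + 1

def quorum_reached(acks, n):
    """The sender delivers once more than two thirds of the n processes
    have acknowledged (integer arithmetic avoids float comparison)."""
    return 3 * acks > 2 * n

assert should_ack(5, 4) and not should_ack(6, 4)
assert quorum_reached(7, 10) and not quorum_reached(6, 10)
```

If two owners race with conflicting messages for the same sequence number, neither can collect a full quorum of acknowledgments, so the account may block but can never double spend.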
the service is consistent, the transfer receives a unique sequence number s. Note that the decided tuple (a, t, s) should be signed by a quorum of owners: this signature will be used by the other processes in the system to verify that the sequence number has indeed been agreed upon by the owners of a. The process then executes the protocol in Fig. 4, with the only modification being that the sequence number seq is not computed locally but adopted from the BFT service. Intuitively, as the transfers associated with a given account are processed by the benign processes in the same order, the resulting protocol ensures that the history of successful transfers is linearizable. On the liveness side, the protocol ensures that every transfer on a non-compromised account is guaranteed to complete.

# 7 Related work

Many systems address the problem of asset transfers, be it in a permissioned (private, with a trusted external access control mechanism) [4,28,32] or permissionless (public, prone to Sybil attacks) setting [2,17,24,34,39,45]. Decentralized systems for the public setting are open to the world. To prevent malicious parties from overtaking the system, these systems rely on Sybil-proof techniques, e.g., proof-of-work or proof-of-stake. The above-mentioned solutions, whether for the permissionless or the permissioned environment, seek to solve consensus. They must inevitably rely on synchrony assumptions or randomization. Avalanche, for example, relaxes consensus by allowing an explicit (small) failure probability. By sidestepping consensus, we can provide a deterministic and asynchronous implementation. It is worth noting that many of those solutions allow for more than just transfers and support richer operations on the system state, so-called smart contracts. Our paper focuses on the original asset transfer problem, as defined by Nakamoto, and we do not address
smart contracts, for certain forms of which consensus is indeed necessary. However, our approach allows for arbitrary operations, as long as those operations affect groups of participants that can solve consensus among themselves. Potential safety or liveness violations of those operations (in case such a group gets compromised) are confined to the group and do not affect the rest of the system.

In the blockchain ecosystem, a lot of work has been devoted to avoiding a totally ordered chain of transfers. The idea is to replace the totally ordered linear structure of a blockchain with a directed acyclic graph (DAG) for structuring the transfers in the system. Notable systems in this spirit include Byteball, Vegvisir, Corda, Nano, and the GHOST protocol. Even though these systems use a DAG to replace the classic blockchain, they still employ consensus. We also use a DAG to characterize the relation between transfers, but we do not resort to solving consensus to build the DAG, nor do we use the DAG to solve consensus. More precisely, we can regard each account as having an individual history. Each such history is managed by the corresponding account owner without depending on a global view of the system. Histories are loosely coupled through a causality relation established by dependencies among transfers.

The important insight that an asynchronous broadcast-style abstraction suffices for transfers appears in the literature as early as 2002, due to Pedone and Schiper. Duan et al. introduce efficient Byzantine fault-tolerant protocols for storage and also build on this insight. So does recent work by Gupta on financial transfers, which seems closest to ours; the proposed algorithm is based on similar principles as some implementations of secure broadcast [36,37]. To the best of our knowledge, however, we are
the first to formally define the asset transfer problem as a shared object type, study its consensus number, and propose algorithms building on top of standard abstractions that are amenable to a real deployment in cryptocurrencies.

> Funding Open Access funding provided by EPFL Lausanne.

Declarations

> Conflict of interest The authors declare that they have no conflict of interest.

> Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit ons.org/licenses/by/4.0/.

# References

1. Abraham, I., Gueta, G., Malkhi, D., Alvisi, L., Kotla, R., Martin, J.P.: Revisiting fast practical byzantine fault tolerance (2017). arXiv:1712.01367
2. Abraham, I., Malkhi, D., Nayak, K., Ren, L., Spiegelman, A.: Solida: a blockchain protocol based on reconfigurable byzantine consensus (2016). arXiv:1612.02916
3. Afek, Y., Attiya, H., Dolev, D., Gafni, E., Merritt, M., Shavit, N.: Atomic snapshots of shared memory. JACM 40(4), 873–890 (1993)
4. Androulaki, E., Barger, A., Bortnikov, V., Cachin, C., Christidis, K., De Caro, A., Enyeart, D., Ferris, C., Laventman, G., Manevich, Y., Muralidharan, S., Murthy, C., Nguyen, B., Sethi, M., Singh, G., Smith, K., Sorniotti, A., Stathakopoulou, C., Vukolić,
M., Cocco, S.W., Yellick, J.: Hyperledger fabric: a distributed operating system for permissioned blockchains. In: Proceedings of the Thirteenth EuroSys Conference, EuroSys '18, pp. 30:1–30:15. ACM, New York (2018)
> 5. Antoniadis, K., Guerraoui, R., Malkhi, D., Seredinschi, D.A.: State machine replication is more expensive than consensus. In: Schmid, U., Widder, J. (eds.) 32nd International Symposium on Distributed Computing (DISC 2018), Leibniz International Proceedings in Informatics (LIPIcs), vol. 121, pp. 7:1–7:18. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, Dagstuhl (2018). https://doi.org/10.4230/LIPIcs.DISC.2018.7
> 6. Attiya, H., Welch, J.L.: Sequential consistency versus linearizability. ACM TOCS 12(2), 91–122 (1994)
> 7. Bentov, I., Gabizon, A., Mizrahi, A.: Cryptocurrencies without proof of work. In: Clark, J., Meiklejohn, S., Ryan, P.Y., Wallach, D., Brenner, M., Rohloff, K. (eds.) Financial Cryptography and Data Security, pp. 142–157. Springer, Berlin (2016)
> 8. Berman, P., Garay, J.A., Perry, K.J.: Towards optimal distributed consensus. In: 30th Annual Symposium on Foundations of Computer Science (FOCS), pp. 410–415. IEEE, Research Triangle Park (1989)
> 9. Bonneau, J., Miller, A., Clark, J., Narayanan, A., Kroll, J.A., Felten, E.W.: SoK: research perspectives and challenges for bitcoin and cryptocurrencies (2015)
> 10. Bracha, G., Toueg, S.: Asynchronous consensus and broadcast protocols. JACM 32(4), 824–840 (1985)
> 11. Cachin, C., Kursawe, K., Petzold, F., Shoup, V.: Secure and efficient asynchronous broadcast protocols. In: Kilian, J. (ed.) Advances in Cryptology—CRYPTO 2001, pp. 524–541. Springer, Berlin (2001)
> 12. Cachin, C., Vukolić, M.: Blockchain consensus protocols in the wild (2017). arXiv:1707.01873
> 13. Castro, M., Liskov, B.: Practical byzantine fault tolerance and proactive recovery. ACM Trans. Comput. Syst. 20(4), 398–461 (2002)
> 14. Churyumov, A.: Byteball: a decentralized system for storage and transfer of value (2016)
> 15. Clement, A., Wong, E.L., Alvisi, L., Dahlin, M., Marchetti, M.: Making
byzantine fault tolerant systems tolerate byzantine faults. In: NSDI, pp. 153–168. USENIX Association, Berkeley (2009)
> 16. Collins, D., Guerraoui, R., Komatovic, J., Kuznetsov, P., Monti, M., Pavlovic, M., Pignolet, Y., Seredinschi, D., Tonkikh, A., Xygkis, A.: Online payments by merely broadcasting messages. In: 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), pp. 26–38 (2020). https://doi.org/10.1109/DSN48063.2020.00023
> 17. Decker, C., Seidel, J., Wattenhofer, R.: Bitcoin meets strong consistency. In: Proceedings of the 17th International Conference on Distributed Computing and Networking, ICDCN '16, pp. 13:1–13:10. ACM, New York (2016)
> 18. Duan, S., Reiter, M.K., Zhang, H.: BEAT: asynchronous BFT made practical. In: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, CCS '18, pp. 2028–2041. ACM, New York (2018)
> 19. Eyal, I., Gencer, A.E., Sirer, E.G., Van Renesse, R.: Bitcoin-NG: a scalable blockchain protocol (2016)
> 20. Fidge, J.C.: Timestamps in message-passing systems that preserve partial ordering. In: Proceedings of the 11th Australian Computer Science Conference, vol. 10(1), pp. 56–66 (1988)
> 21. Fischer, M.J., Lynch, N.A., Paterson, M.S.: Impossibility of distributed consensus with one faulty process. JACM 32(2), 374–382 (1985)
> 22. Garay, J., Kiayias, A., Leonardos, N.: The bitcoin backbone protocol: analysis and applications. In: Oswald, E., Fischlin, M. (eds.) Advances in Cryptology—EUROCRYPT 2015, pp. 281–310. Springer, Berlin (2015)
> 23. Garay, J.A., Katz, J., Kumaresan, R., Zhou, H.S.: Adaptively secure broadcast, revisited. In: Proceedings of the 30th Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, PODC '11, pp. 179–186. ACM, New York, NY, USA (2011)
> 24. Gilad, Y., Hemo, R., Micali, S., Vlachos, G., Zeldovich, N.: Algorand: scaling byzantine agreements for cryptocurrencies. In: Proceedings of the 26th Symposium on Operating Systems Principles, SOSP '17, pp. 51–68. ACM, New York (2017). https://doi.org/10.1145/3132747.3132757
> 25. Guerraoui, R., Pavlovic,
M., Seredinschi, D.A.: Blockchain protocols: the adversary is in the details. In: Symposium on Foundations and Applications of Blockchain (2018)
> 26. Gupta, S.: A non-consensus based decentralized financial transaction processing model with support for efficient auditing. Master's thesis, Arizona State University, USA (2016)
> 27. Hadzilacos, V., Toueg, S.: Fault-tolerant broadcasts and related problems. In: Mullender, S.J. (ed.) Distributed Systems, Chap. 5, pp. 97–145. Addison-Wesley, Reading (1993)
> 28. Hearn, M.: Corda: a distributed ledger. Corda Technical White Paper (2016)
> 29. Herlihy, M.: Wait-free synchronization. TOPLAS 13(1), 123–149 (1991)
> 30. Herlihy, M.P., Wing, J.M.: Linearizability: a correctness condition for concurrent objects. ACM Trans. Program. Lang. Syst. 12(3), 463–492 (1990)
> 31. Jayanti, P., Toueg, S.: Some results on the impossibility, universality, and decidability of consensus. In: Segall, A., Zaks, S. (eds.) Distributed Algorithms, pp. 69–84. Springer, Berlin (1992)
> 32. Karlsson, K., Jiang, W., Wicker, S., Adams, D., Ma, E., van Renesse, R., Weatherspoon, H.: A partition-tolerant blockchain for the Internet-of-Things. In: 38th IEEE International Conference on Distributed Computing Systems, ICDCS 2018, Vienna, Austria, pp. 1150–1158. IEEE Computer Society (2018). https://doi.org/10.1109/ICDCS.2018.00114
> 33. Kogias, E.K., Jovanovic, P., Gailly, N., Khoffi, I., Gasser, L., Ford, B.: Enhancing bitcoin security and performance with strong consistency via collective signing. USENIX Security (2016)
> 34. Kokoris-Kogias, E., Jovanovic, P., Gasser, L., Gailly, N., Syta, E., Ford, B.: OmniLedger: a secure, scale-out, decentralized ledger via sharding. IEEE Symp. Secur. Priv., 583–598 (2018)
> 35. LeMahieu, C.: Nano: a feeless distributed cryptocurrency network. Nano (2018). Accessed 18 Jan 2019
> 36. Malkhi, D., Merritt, M., Rodeh, O.: Secure reliable multicast protocols in a WAN. In: ICDCS (1997)
> 37. Malkhi, D., Reiter, M.K.: A high-throughput secure reliable multicast protocol. J. Comput. Secur. 5(2), 113–128 (1997). https://doi.org/10.3233/JCS-1997-5203
> 38. Mazieres, D.: The stellar consensus protocol: a federated model for
internet-level consensus. Stellar Development Foundation (2015)
> 39. Nakamoto, S.: Bitcoin: a peer-to-peer electronic cash system (2008)
> 40. Pedone, F., Schiper, A.: Handling message semantics with generic broadcast protocols. Distrib. Comput. 15(2), 97–107 (2002)
> 41. Rapoport, P., Leal, R., Griffin, P., Sculley, W.: The ripple protocol (2014)
> 42. Sompolinsky, Y., Zohar, A.: Accelerating Bitcoin's transaction processing: fast money grows on trees, not chains. IACR Cryptology ePrint Archive 881 (2013)
> 43. Sousa, J., Bessani, A., Vukolic, M.: A byzantine fault-tolerant ordering service for the hyperledger fabric blockchain platform. In: IEEE DSN (2018)
> 44. Szabo, N.: Formalizing and securing relationships on public networks. First Monday 2(9) (1997)
> 45. Team-Rocket: Snowflake to avalanche: a novel metastable consensus protocol family for cryptocurrencies. White Paper (2018). Revision: 05/16/2018 21:51:26 UTC
> 46. Toueg, S.: Randomized byzantine agreements. In: Proceedings of the Third Annual ACM Symposium on Principles of Distributed Computing, PODC '84, pp. 163–178. ACM, New York (1984)
> 47. Vukolić, M.: The quest for scalable blockchain fabric: proof-of-work vs. BFT replication. In: International Workshop on Open Problems in Network Security (2015)
> 48. Wood, G.: Ethereum: a secure decentralized generalized transaction ledger. White paper (2015)

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
> arXiv:1906.05574v1 [cs.DC] 13 Jun 2019

# The Consensus Number of a Cryptocurrency ∗
# (Extended Version)

# Rachid Guerraoui rachid.guerraoui@epfl.ch EPFL Lausanne, Switzerland
# Petr Kuznetsov petr.kuznetsov@telecom-paristech.fr LTCI, Télécom Paris, IP Paris Paris, France
# Matteo Monti matteo.monti@epfl.ch EPFL Lausanne, Switzerland
# Matej Pavlovič matej.pavlovic@epfl.ch EPFL Lausanne, Switzerland
# Dragos-Adrian Seredinschi † dragos-adrian.seredinschi@epfl.ch EPFL Lausanne, Switzerland

## ABSTRACT

Many blockchain-based algorithms, such as Bitcoin, implement a decentralized asset transfer system, often referred to as a cryptocurrency. As stated in the original paper by Nakamoto, at the heart of these systems lies the problem of preventing double-spending; this is usually solved by achieving consensus on the order of transfers among the participants. In this paper, we treat the asset transfer problem as a concurrent object and determine its consensus number, showing that consensus is, in fact, not necessary to prevent double-spending. We first consider the problem as defined by Nakamoto, where only a single process—the account owner—can withdraw from each account. Safety and liveness need to be ensured for correct account owners, whereas misbehaving account owners might be unable to perform transfers. We show that the consensus number of an asset transfer object is 1. We then consider a more general k-shared asset transfer object where up to k processes can atomically withdraw from the same account, and show that this object has consensus number k. We establish our results in the context of shared memory with benign faults, allowing us to properly understand the level of difficulty of the asset transfer problem. We also translate these results in the message passing setting with Byzantine players, a model that is more relevant in practice. In this model, we describe an asynchronous Byzantine fault-tolerant asset transfer
implementation that is both simpler and more efficient than state-of-the-art consensus-based solutions. Our results are applicable to both the permissioned (private) and permissionless (public) setting, as normally their differentiation is hidden by the abstractions on top of which our algorithms are based.

## CCS CONCEPTS

• Theory of computation → Distributed algorithms.

## KEYWORDS

distributed computing, distributed asset transfer, blockchain, consensus

> ∗This is an extended version of a conference article, comprising an additional section (Section 6). The conference version of this article appears in the proceedings of the 2019 ACM Symposium on Principles of Distributed Computing (PODC'19), July 29–August 2, 2019, Toronto, ON, Canada.
> †This work has been supported in part by the European ERC Grant 339539 - AOC.

## 1 INTRODUCTION

The Bitcoin protocol, introduced in 2008 by Satoshi Nakamoto, implements a cryptocurrency: an electronic decentralized asset transfer system. Since then, many alternatives to Bitcoin came to prominence. These include major cryptocurrencies such as Ethereum or Ripple, as well as systems sparked from research or industry efforts such as Bitcoin-NG, Algorand, ByzCoin, Stellar, Hyperledger, Corda, or Solida. Each alternative brings novel approaches to implementing decentralized transfers, and sometimes offers a more general interface (known as smart contracts) than the original protocol proposed by Nakamoto. They improve over Bitcoin in various aspects, such as performance, energy-efficiency, or security. A common theme in these protocols, whether they are for basic transfers or smart contracts, is that they seek to implement a blockchain—a distributed ledger where all the transfers in the system are totally ordered. Achieving total order among multiple inputs (e.g., transfers) is fundamentally a hard task, equivalent to solving consensus [25, 27]. Consensus, a central problem in distributed computing, is
known for its notorious difficulty. It has no deterministic solution in asynchronous systems if just a single participant can fail. Partially synchronous consensus algorithms are tricky to implement correctly [1, 12, 15] and face tough trade-offs between performance, security, and energy-efficiency [5, 8, 23, 46]. Not surprisingly, the consensus module is a major bottleneck in blockchain-based protocols [26, 42, 46]. A close look at Nakamoto's original paper reveals that the central issue in implementing a decentralized asset transfer system (i.e., a cryptocurrency) is preventing double-spending, i.e., spending the same money more than once. Bitcoin and numerous follow-up systems typically assume that total order—and thus consensus—is vital to preventing double-spending. There seems to be a common belief, indeed, that a consensus algorithm is essential for implementing decentralized asset transfers [9, 23, 31, 38]. As our main result in this paper, we show that this belief is false. We do so by casting the asset transfer problem as a sequential object type and determining that it has consensus number 1 in Herlihy's hierarchy.1

> 1The consensus number of an object type is the maximal number of processes that can solve consensus using only read-write shared memory and arbitrarily many objects of this type.

The intuition behind this result is the following. An asset transfer object maintains a set of accounts. Each account is associated with an owner process that is the only one allowed to issue transfers withdrawing from this account. Every process can however read the balance of any account. The main insight here is that relating accounts to unique owners obviates the need for consensus. It is the owner that decides on the order of transfers from its own account, without the
need to agree with any other process—thus the consensus number 1. Other processes only validate the owner's decisions, ensuring that causal relations across accounts are respected. We describe a simple asset transfer implementation using atomic-snapshot memory. A withdrawal from an account is validated by relating the withdrawn amount with the incoming transfers found in the memory snapshot. Intuitively, as at most one withdrawal can be active on a given account at a time, it is safe to declare the validated operation as successful and post it in the snapshot memory. We also present a natural generalization of our result to the setting in which multiple processes are allowed to withdraw from the same account. A k-shared asset-transfer object allows up to k processes to execute outgoing transfers from the same account. We prove that such an object has consensus number k and thus allows for implementing state machine replication (now often referred to as smart contracts) among the k involved processes using k-consensus objects. We show that k-shared asset transfer has consensus number k by reducing it to k-consensus (known to have consensus number k) and reducing k-consensus to asset transfer. Having established the relative ease of the asset transfer problem using the shared memory model, we also present a practical solution to this problem in the setting of Byzantine fault-prone processes communicating via message passing. This setting matches realistic deployments of distributed systems. We describe an asset transfer implementation that does not resort to consensus. Instead, the implementation relies on a secure broadcast primitive that ensures uniform reliable delivery with only weak ordering guarantees [35, 36], circumventing hurdles imposed by consensus. In the k-shared case, our results imply that to execute some form of smart contract involving k users, consensus is only needed among these k
nodes and not among all nodes in the system. In particular, should these k nodes be faulty, the rest of the accounts will not be affected. To summarize, we argue that treating the asset transfer problem as a concurrent data structure and measuring its hardness through the lens of distributed computing help understand it and devise better solutions to it. The rest of this paper is organized as follows. We first give the formal definition of the shared memory model and the asset transfer object type (Section 2). Then, we show that this object type has consensus number 1 (Section 3). Next, we generalize our result by proving that a k-shared asset transfer object has consensus number k (Section 4). Finally, we describe the implications of our results in the message passing model with Byzantine faults (Sections 5 and 6) and discuss related work (Section 7).

## 2 SHARED MEMORY MODEL AND ASSET-TRANSFER OBJECT TYPE

We now present the shared memory model (Section 2.1) and precisely define the problem of asset-transfer as a sequential object type (Section 2.2).

## 2.1 Definitions

Processes. We assume a set Π of N asynchronous processes that communicate by invoking atomic operations on shared memory objects. Processes are sequential—we assume that a process never invokes a new operation before obtaining a response from a previous one.

Object types. A sequential object type is defined as a tuple T = (Q, q0, O, R, ∆), where Q is a set of states, q0 ∈ Q is an initial state, O is a set of operations, R is a set of responses and ∆ ⊆ Q × Π × O × Q × R is a relation that associates a state, a process identifier and an operation to a set of possible new states and corresponding responses.
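To make the tuple definition concrete, here is a minimal Python sketch of such a transition relation, instantiated with the asset-transfer type defined later in Section 2.2. All names here (`delta`, `mu`, `q0`) are illustrative assumptions, not code from the paper; states are maps from accounts to balances, and `mu` maps each account to its set of owners.

```python
# A sketch of a sequential object type T = (Q, q0, O, R, Delta),
# instantiated with the asset-transfer type of Section 2.2.

def delta(q, p, op, mu):
    """Apply operation `op` invoked by process `p` in state `q`.
    Returns (new_state, response), following the relation Delta."""
    if op[0] == "transfer":
        _, a, b, x = op
        if p not in mu[a] or q[a] < x:   # p not an owner, or balance too low
            return q, False               # failed transfer: state unchanged
        q2 = dict(q)
        q2[a] -= x                        # debit the source account
        q2[b] += x                        # credit the destination account
        return q2, True
    else:                                 # op = ("read", a)
        _, a = op
        return q, q[a]                    # reads leave all balances untouched

# Account "a" is owned by process 1 and starts with 10 units.
q0 = {"a": 10, "b": 0}
mu = {"a": {1}, "b": {2}}
q1, ok = delta(q0, 1, ("transfer", "a", "b", 4), mu)   # succeeds: 10 >= 4
_, bad = delta(q1, 2, ("transfer", "a", "b", 1), mu)   # fails: 2 does not own "a"
_, bal = delta(q1, 2, ("read", "b"), mu)               # returns the balance of "b"
```

Note that `delta` is total on its first three arguments, as the definition requires: every (state, process, operation) triple yields some (state, response) pair.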
We assume that ∆ is total on the first three elements. A history is a sequence of invocations and responses, each invocation or response associated with a process identifier. A sequential history is a history that starts with an invocation and in which every invocation is immediately followed with a response associated with the same process. A sequential history is legal if its invocations and responses respect the relation ∆ for some sequence of state assignments.

Implementations. An implementation of an object type T is a distributed algorithm that, for each process and invoked operation, prescribes the actions that the process needs to take to perform it. An execution of an implementation is a sequence of events: invocations and responses of operations or atomic accesses to shared abstractions. The sequence of events at every process must respect the algorithm assigned to it.

Failures. Processes are subject to crash failures (we consider more general Byzantine failures in the next section). A process may halt prematurely, in which case we say that the process is crashed. A process is called faulty if it crashes during the execution. A process is correct if it is not faulty. All algorithms we present in the shared memory model are wait-free—every correct process eventually returns from each operation it invokes, regardless of an arbitrary number of other processes crashing or concurrently invoking operations.

Linearizability. For each pattern of operation invocations, the execution produces a history, i.e., a sequence of distinct invocations and responses, labelled with process identifiers and unique sequence numbers. A projection of a history H to process p, denoted H|p, is the subsequence of elements of H labelled with p. An invocation o by a process p is incomplete in H if it is not followed by a response in
H|p. A history is complete if it has no incomplete invocations. A completion of H is a history H̄ that is identical to H except that every incomplete invocation in H is either removed or completed by inserting a matching response somewhere after it. An invocation o1 precedes an invocation o2 in H, denoted o1 ≺H o2, if o1 is complete and the corresponding response r1 precedes o2 in H. Note that ≺H stipulates a partial order on invocations in H. A linearizable implementation (also said an atomic object) of type T ensures that for every history H it produces, there exists a completion H̄ and a legal sequential history S such that (1) for all processes p, H̄|p = S|p and (2) ≺H ⊆ ≺S.

Consensus number. The problem of consensus consists for a set of processes to propose values and decide on the proposed values so that no two processes decide on different values and every correct process decides. The consensus number of a type T is the maximal number of processes that can solve consensus using atomic objects of type T and read-write registers.

## 2.2 The asset transfer object type

Let A be a set of accounts and μ : A → 2^Π be an "owner" map that associates each account with a set of processes that are, intuitively, allowed to debit the account. We define the asset-transfer object type associated with A and μ as a tuple (Q, q0, O, R, ∆), where:

• The set of states Q is the set of all possible maps q : A → N. Intuitively, each state of the object assigns each account its balance.
• The initialization map q0 : A →
N assigns the initial balance to each account.
• Operations and responses of the type are defined as O = {transfer(a, b, x) : a, b ∈ A, x ∈ N} ∪ {read(a) : a ∈ A} and R = {true, false} ∪ N.
• ∆ is the set of valid state transitions. For a state q ∈ Q, a process p ∈ Π, an operation o ∈ O, a response r ∈ R and a new state q′ ∈ Q, the tuple (q, p, o, q′, r) ∈ ∆ if and only if one of the following conditions is satisfied:
– o = transfer(a, b, x) ∧ p ∈ μ(a) ∧ q(a) ≥ x ∧ q′(a) = q(a) − x ∧ q′(b) = q(b) + x ∧ ∀c ∈ A \ {a, b} : q′(c) = q(c) (all other accounts unchanged) ∧ r = true;
– o = transfer(a, b, x) ∧ (p ∉ μ(a) ∨ q(a) < x) ∧ q′ = q ∧ r = false;
– o = read(a) ∧ q = q′ ∧ r = q(a).

In other words, operation transfer(a, b, x) invoked by process p succeeds if and only if p is the owner of the source account a and account a has enough balance, and if it does, x is transferred from a to the destination account b. A transfer(a, b, x) operation is called outgoing for a and incoming for b; respectively, the x units are called outgoing for a and incoming for b. A transfer is successful if its corresponding response is true and failed if its corresponding response is false. Operation read(a) simply returns the balance of a and leaves the account balances untouched. As in Nakamoto's
original paper, we assume for the moment that an asset-transfer object has at most one owner per account: ∀a ∈ A : |μ(a)| ≤ 1. Later we lift this assumption and consider more general k-shared asset-transfer objects with arbitrary owner maps μ (Section 4). For the sake of simplicity, we also restrict ourselves to transfers with a single source account and a single destination account. However, our definition (and implementation) of the asset-transfer object type can trivially be extended to support transfers with multiple source accounts (all owned by the same sequential process) and multiple destination accounts.

## 3 ASSET TRANSFER HAS CONSENSUS NUMBER 1

In this section, we show that the asset-transfer type can be wait-free implemented using only read-write registers in a shared memory system with crash failures. Thus, the type has consensus number 1. Consider an asset-transfer object associated with a set of accounts A and an ownership map μ where ∀a ∈ A, |μ(a)| ≤ 1. Our implementation is described in Figure 1. Every process p is associated with a distinct location in an atomic snapshot object storing the set of all successful transfer operations executed by p so far. Since each account is owned by at most one process, all outgoing transfers for an account appear in a single location of the atomic snapshot (associated with the owner process). This principle bears a similarity to the implementation of a counter object. Recall that the atomic snapshot (AS) memory is represented as a vector of N shared variables that can be accessed with two atomic operations: update and snapshot. An update operation modifies the value at a given position of the vector and a snapshot returns the state of the whole vector. We implement the read and transfer operations as follows. •
To read the balance of an account a, the process simply takes a snapshot S and returns the initial balance plus the sum of incoming amounts minus the sum of all outgoing amounts. We denote this number by balance(a, S). As we argue below, the result is guaranteed to be non-negative, i.e., the operation is correct with respect to the type specification.
• To perform transfer(a, b, x), a process p, the owner of a, takes a snapshot S and computes balance(a, S). If the amount to be transferred does not exceed balance(a, S), we add the transfer operation to the set of p's operations in the snapshot object via an update operation and return true. Otherwise, the operation returns false.

> Shared variables:
>   AS, atomic snapshot, initially {⊥}^N
> Local variables:
>   ops_p ⊆ A × A × N, initially ∅
>
> Upon transfer(a, b, x):
> 1  S = AS.snapshot()
> 2  if balance(a, S) < x then
> 3      return false
> 4  ops_p = ops_p ∪ {(a, b, x)}
> 5  AS.update(ops_p)
> 6  return true
>
> Upon read(a):
> 7  S = AS.snapshot()
> 8  return balance(a, S)

Figure 1: Wait-free implementation of asset-transfer: code for process p

Theorem 1. The asset-transfer object type has a wait-free implementation in the read-write shared memory model.

Proof. Fix an execution E of the algorithm in Figure 1. Atomic snapshots can be wait-free implemented in the read-write shared memory model. As every operation only involves a finite number of atomic snapshot accesses, every process completes each of the operations it invokes in a finite number of its own steps. Let Ops be the set of:
• All invocations of transfer or read in E
that returned, and
• All invocations of transfer in E that completed the update operation (line 5).

Let H be the history of E. We define a completion of H and, for each o ∈ Ops, we define a linearization point as follows:
• If o is a read operation, it linearizes at the linearization point of the snapshot operation in line 7.
• If o is a transfer operation that returns false, it linearizes at the linearization point of the snapshot operation in line 1.
• If o is a transfer operation that completed the update operation, it linearizes at the linearization point of the update operation in line 5. If o is incomplete in H, we complete it with response true.

Let H̄ be the resulting complete history and let L be the sequence of complete invocations of H̄ in the order of their linearization points in E. Note that, by the way we linearize invocations, the linearization of a prefix of E is a prefix of L. Now we show that L is legal and, thus, H is linearizable. We proceed by induction, starting with the empty (trivially legal) prefix of L. Let Lℓ be the legal prefix of the first ℓ invocations and op be the (ℓ + 1)st operation of L. Let op be invoked by process p. The following cases are possible:
• op is a read(a): the snapshot taken at the linearization point of op contains all successful transfers concerning a in Lℓ. By the induction hypothesis, the resulting balance is non-negative.
• op is a failed transfer(a, b, x): the snapshot taken at the linearization point of op contains all successful transfers concerning a in Lℓ. By the induction hypothesis, the resulting balance is non-negative; by the algorithm, it is less than x, so the false response is legal.
• op is a
successful transfer(a, b, x): by the algorithm, before the linearization point of op, process p took a snapshot. Let Lk, k ≤ ℓ, be the prefix of Lℓ that only contains operations linearized before the point in time when the snapshot was taken by p. We observe that Lk includes a subset of all incoming transfers on a and all outgoing transfers on a in Lℓ. Indeed, as p is the owner of a and only the owner of a can perform outgoing transfers on a, all outgoing transfers in Lℓ were linearized before the moment p took the snapshot within op. Thus, balance(a, Lk) ≤ balance(a, Lℓ).2 By the algorithm, as op = transfer(a, b, x) succeeds, we have balance(a, Lk) ≥ x. Thus, balance(a, Lℓ) ≥ x and the resulting balance in Lℓ+1 is non-negative. Thus, H is linearizable.

> 2Analogously to balance(a, S), which computes the balance for account a based on the transfers contained in snapshot S, balance(a, L), where L is a sequence of operations, computes the balance of account a based on all transfers in L.

> Shared variables:
>   R[i], i ∈ 1, . . . , k, k registers, initially R[i] = ⊥, ∀i
>   AT, k-shared asset-transfer object containing:
>   – an account a with initial balance 2k, owned by processes 1, . . . , k
>   – some account s
>
> Upon propose(v):
> 1  R[p].write(v)
> 2  AT.transfer(a, s, 2k − p)
> 3  return R[AT.read(a)].read()

Figure 2: Wait-free implementation of consensus among k processes using a k-shared asset-transfer object. Code for process p ∈ {1, . . . , k}.

Corollary 1. The asset-transfer object type has consensus number 1.

## 4 k-SHARED ASSET TRANSFER HAS CONSENSUS NUMBER k

We now consider
the case with an arbitrary owner map μ. We show that an asset-transfer object's consensus number is the maximal number of processes sharing an account. More precisely, the consensus number of an asset-transfer object is max_{a∈A} |μ(a)|. We say that an asset-transfer object, defined on a set of accounts A with an ownership map μ, is k-shared iff max_{a∈A} |μ(a)| = k. In other words, the object is k-shared if μ allows at least one account to be owned by k processes, and no account is owned by more than k processes. We show that the consensus number of any k-shared asset-transfer object is k, which generalizes our result in Corollary 1. We first show that such an object has consensus number at least k by implementing consensus for k processes using only registers and an instance of k-shared asset-transfer. We then show that k-shared asset-transfer has consensus number at most k by reducing it to k-consensus, an object known to have consensus number k.

Lemma 1. Consensus has a wait-free implementation for k processes in the read-write shared memory model equipped with a single k-shared asset-transfer object.

Proof. We now provide a wait-free algorithm that solves consensus among k processes using only registers and an instance of k-shared asset-transfer. The algorithm is described in Figure 2. Intuitively, k processes use one shared account a to elect one of them whose input value will be decided. Before a process p accesses the shared account, p announces its input in a register (line 1). Process p then tries to perform a transfer from account a to another account. The amount withdrawn this way from account a is chosen specifically such that: (1) only one transfer operation can ever succeed, and (2) if the transfer
succeeds, the remaining balance on a will uniquely identify process p. To satisfy the above conditions, we initialize the balance of account a to 2k and have each process p ∈ {1, . . . , k} transfer 2k − p (line 2). Note that transfer operations invoked by distinct processes p, q ∈ {1, . . . , k} have arguments 2k − p and 2k − q, and 2k − p + 2k − q ≥ 2k − k + 2k − (k − 1) = 2k + 1. The initial balance of a is only 2k and no incoming transfers are ever executed. Therefore, the first transfer operation to be applied to the object succeeds (no transfer tries to withdraw more than 2k) and the remaining operations will have to fail due to insufficient balance. When p reaches line 3, at least one transfer must have succeeded: (1) either p's transfer succeeded, or (2) p's transfer failed due to insufficient balance, in which case some other process must have previously succeeded. Let q be the process whose transfer succeeded. Thus, the balance of account a is 2k − (2k − q) = q. Since q performed a transfer operation, by the algorithm, q must have previously written its proposal to the register R[q]. Regardless of whether p = q or p ≠ q, reading the balance of account a returns q and p decides the value of R[q].

To prove that k-shared asset-transfer has consensus number at most k, we reduce k-shared asset-transfer to k-consensus. A k-consensus object exports a single operation propose that, the first k times it is invoked, returns the argument of the first invocation. All subsequent invocations return ⊥. Given that k-consensus is
known to have consensus number exactly k, a wait-free algorithm implementing k-shared asset-transfer using only registers and k-consensus objects implies that the consensus number of k-shared asset-transfer is not more than k. The algorithm reducing k-shared asset-transfer to k-consensus is given in Figure 3. Before presenting a formal correctness argument, we first informally explain the intuition of the algorithm. In our reduction, we associate a series of k-consensus objects with every account a. Up to k owners of a use the k-consensus objects to agree on the order of outgoing transfers for a. We maintain the state of the implemented k-shared asset-transfer object using an atomic snapshot object AS. Every process p uses a distinct entry of AS to store a set hist. hist is a subset of all completed outgoing transfers from accounts that p owns (and thus is allowed to debit). For example, if p is the owner of accounts d and e, p's hist contains outgoing transfers from d and e. Each element in the hist set is represented as ((a, b, x, s, r), result), where a, b, and x are the respective source account, destination account, and the amount transferred, s is the originator of the transfer, and r is the round in which the transfer was invoked by the originator. The value of result ∈ {success, failure} indicates whether the transfer succeeds or fails. A transfer becomes “visible” when any process inserts it in its corresponding entry of AS. To read the balance of account a, a process takes a snapshot of AS, then sums the initial balance q0(a) and the amounts of all successful incoming transfers, and subtracts the amounts of successful outgoing transfers found in AS. We say that a successful transfer tx is
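As an illustration, the balance computation just described can be sketched as follows (a hypothetical Python rendering; transfers are tuples (source, destination, amount, originator, round) paired with a result flag, mirroring the hist entries above, and the initial balance q0(a) is passed in explicitly):

```python
def balance(a, snapshot, initial):
    """Balance of account a from a snapshot of AS: the initial balance
    q0(a), plus successful incoming transfers, minus successful outgoing
    ones. A transfer counts once even if it appears in several
    processes' hist entries."""
    seen = set()
    for entry in snapshot:                # one hist set per process
        seen |= {tx for tx, result in entry if result == 'success'}
    incoming = sum(x for (_src, dst, x, _s, _r) in seen if dst == a)
    outgoing = sum(x for (src, _dst, x, _s, _r) in seen if src == a)
    return initial + incoming - outgoing

snapshot = [{(('d', 'a', 5, 'p', 0), 'success'),
             (('a', 'e', 2, 'q', 0), 'success'),
             (('a', 'e', 9, 'q', 1), 'failure')}]
print(balance('a', snapshot, 10))  # 10 + 5 - 2 = 13
```

Note that the failed transfer of 9 is ignored: only transfers flagged success affect the computed balance.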
in a snapshot AS (denoted by (tx, success) ∈ AS) if there exists an entry e in AS such that (tx, success) ∈ AS[e]. To execute a transfer o outgoing from account a, a process p first announces o in a register Ra that can be written by p and read by any other process. This enables a “helping” mechanism needed to ensure wait-freedom to the owners of a. Next, p collects the transfers proposed by other owners and tries to agree on the order of the collected transfers and their results

Shared variables:
  AS, atomic snapshot object
  for each a ∈ A:
    Ra[i], i ∈ Π, registers, initially [⊥, . . . , ⊥]
    kCa[i], i ≥ 0, list of instances of k-consensus objects

Local variables:
  hist, a set of completed transfers, initially empty
  for each a ∈ A:
    committed_a, initially ∅
    round_a, initially 0

upon transfer(a, b, x):
1   if p ∉ μ(a) then
2     return false
3   tx = (a, b, x, p, round_a)
4   Ra[p].write(tx)
5   collected = collect(a) \ committed_a
6   while tx ∈ collected do
7     req = the oldest transfer in collected
8     prop = proposal(req, AS.snapshot())
9     decision = kCa[round_a].propose(prop)
10    hist = hist ∪ {decision}
11    AS.update(hist)
12    committed_a = committed_a ∪ {t : decision = (t, ∗)}
13    collected = collected \ committed_a
14    round_a = round_a + 1
15  if
(tx, success) ∈ hist then
16    return true
17  else
18    return false

upon read(a):
19  return balance(a, AS.snapshot())

collect(a):
20  collected = ∅
21  for all i ∈ Π do
22    if Ra[i].read() ≠ ⊥ then
23      collected = collected ∪ {Ra[i].read()}
24  return collected

proposal((a, b, q, x), snapshot):
25  if balance(a, snapshot) ≥ x then
26    prop = ((a, b, q, x), success)
27  else
28    prop = ((a, b, q, x), failure)
29  return prop

balance(a, snapshot):
30  incoming = {tx : tx = (∗, a, ∗, ∗, ∗) ∧ (tx, success) ∈ snapshot}
31  outgoing = {tx : tx = (a, ∗, ∗, ∗, ∗) ∧ (tx, success) ∈ snapshot}
32  return q0(a) + (Σ_{(∗,a,x,∗,∗)∈incoming} x) − (Σ_{(a,∗,x,∗,∗)∈outgoing} x)

Figure 3: Wait-free implementation of a k-shared asset-transfer object using k-consensus objects. Code for process p.

using a series of k-consensus objects. For each account, the agreement on the order of transfer-result pairs proceeds in rounds. Each round is associated with a k-consensus object which p invokes with a proposal chosen from the set of collected transfers. Since each process, in each round, invokes the k-consensus object only once, no k-consensus object is invoked more than k times, and thus each invocation returns a value (and not ⊥). A transfer-result pair as a proposal for the next
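The behavior of the k-consensus objects used in these rounds can be emulated by a toy sequential sketch (names are ours; a real k-consensus object is a linearizable shared object, not a local class, and None stands in for ⊥):

```python
class KConsensus:
    """Toy sequential stand-in for a k-consensus object: the first k
    invocations of propose return the argument of the first invocation;
    all later invocations return None (standing for the ⊥ of the
    specification)."""
    def __init__(self, k):
        self.k = k
        self.calls = 0
        self.decided = None

    def propose(self, value):
        self.calls += 1
        if self.calls == 1:
            self.decided = value      # the first proposal wins
        return self.decided if self.calls <= self.k else None

kc = KConsensus(3)
print([kc.propose(v) for v in ['t1', 't2', 't3', 't4']])
# ['t1', 't1', 't1', None]
```

This is why the reduction lets each process invoke each object at most once: with at most k owners per account, every invocation falls within the first k and returns a proper decision.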
instance of k-consensus is chosen as follows. Process p picks the “oldest” collected but not yet committed operation (based on the round number round_a attached to the transfer operation when a process announces it; ties are broken using process IDs). Then p takes a snapshot of AS, checks whether account a has sufficient balance according to the state represented by the snapshot, and equips the transfer with a corresponding success/failure flag. The resulting transfer-result pair constitutes p's proposal for the next instance of k-consensus. The transfer currently executed by process p returns as soon as it is decided by a k-consensus object, the flag of the decided value (success/failure) indicating the transfer's response (true/false).

Lemma 2. The k-shared asset-transfer object type has a wait-free implementation in the read-write shared memory model equipped with k-consensus objects.

Proof. We essentially follow the footsteps of the proof of Theorem 1. Fix an execution E of the algorithm in Figure 3. Let H be the history of E. To perform a transfer o on an account a, p registers it in Ra[p] (line 4) and then proceeds through a series of k-consensus objects, each time collecting Ra to learn about the transfers concurrently proposed by other owners of a. Recall that each k-consensus object is wait-free. Suppose, by contradiction, that o is registered in Ra but is never decided by any instance of k-consensus. Eventually, however, o becomes the request with the lowest round number in Ra and, thus, some instance of k-consensus will only be accessed with o as a proposed value (line 9). By validity of k-consensus, this instance will return o and, thus, p will be able to complete o.

Let Ops be the set of all complete operations and all transfer
operations o such that some process completed the update operation (line 11) in E with an argument including o (the atomic snapshot and k-consensus operations have been linearized). Intuitively, we include in Ops all operations that took effect, either by returning a response to the user or by affecting other operations. Recall that every such transfer operation was agreed upon in an instance of k-consensus; let it be kCo. Therefore, for every such transfer operation o, we can identify the process qo whose proposal has been decided in that instance. We now determine a completion of H and, for each o ∈ Ops, we define a linearization point as follows:
• If o is a read operation, it linearizes at the linearization point of the snapshot operation (line 19).
• If o is a transfer operation that returns false, it linearizes at the linearization point of the snapshot operation (line 8) performed by qo just before it invoked kCo.propose().
• If o is a transfer operation that some process included in the update operation (line 11), it linearizes at the linearization point of the first update operation in H (line 11) that includes o. Furthermore, if o is incomplete in H, we complete it with response true.

Let ¯H be the resulting complete history and let L be the sequence of complete operations of ¯H in the order of their linearization points in E. Note that, by the way we linearize operations, the linearization of a prefix of E is a prefix of L. Also, by construction, the linearization point of an operation belongs to its interval. Now we show that L is legal and, thus, H is linearizable. We proceed by induction, starting with the empty (trivially legal) prefix
of L. Let Lℓ be the legal prefix of the first ℓ operations and op be the (ℓ + 1)st operation of L. Let op be invoked by process p. The following cases are possible:
• op is a read(a): the snapshot taken at op's linearization point contains all successful transfers concerning a in Lℓ. By the induction hypothesis, the resulting balance is non-negative.
• op is a failed transfer(a, b, x): the snapshot taken at the linearization point of op contains all successful transfers concerning a in Lℓ. By the induction hypothesis, the balance corresponding to this snapshot is non-negative. By the algorithm, the balance is less than x.
• op is a successful transfer(a, b, x). Let Ls, s ≤ ℓ, be the prefix of Lℓ that only contains operations linearized before the moment of time when qo has taken the snapshot just before accessing kCo. As, before accessing kCo, qo went through all preceding k-consensus objects associated with a and put the decided values in AS, Ls must include all outgoing transfer operations for a. Furthermore, Ls includes a subset of all incoming transfers on a. Thus, balance(a, Ls) ≤ balance(a, Lℓ). By the algorithm, as op = transfer(a, b, x) succeeds, we have balance(a, Ls) ≥ x. Thus, balance(a, Lℓ) ≥ x and the resulting balance in Lℓ+1 is non-negative.

Thus, H is linearizable.

Theorem 2. A k-shared asset-transfer object has consensus number k.

Proof. It follows directly from Lemma 1 that k-shared asset-transfer has consensus number at least k. Moreover, it follows from Lemma 2 that k-shared asset-transfer has consensus number at most k. Thus, the consensus number of k-shared asset-transfer is exactly k.
## 5 ASSET TRANSFER IN MESSAGE PASSING

We established our theoretical results in a shared memory system with crash failures, proving that consensus is not necessary for implementing an asset transfer system. Moreover, a natural generalization of such a system where up to k processes have access to atomic operations on the same account has consensus number k. These results help us understand the level of difficulty of certain problems in the domain of cryptocurrencies. To achieve a practical impact, however, we need an algorithm deployable as a distributed system in a realistic setting. Arguably, such a setting is one where processes (some of which are potentially malicious) communicate by exchanging messages.

In this section we overview an extension of our results to the message passing system with Byzantine failures. Instead of consensus, we rely on a secure broadcast primitive that provides reliable delivery with weak (weaker than FIFO) ordering guarantees. Using secure broadcast, processes announce their transfers to the rest of the system. We establish dependencies among these transfers that induce a partial order. Using a method similar to (a weak form of) vector clocks, we make sure that each process applies the transfers respecting this dependency-induced partial order. In a nutshell, a transfer only depends on all previous transfers outgoing from the same account, and on a subset of transfers incoming to that account. Each transfer operation corresponds to one invocation of secure broadcast by the corresponding account's owner. The message being broadcast carries, in addition to the transfer itself, references to the transfer's dependencies.

As secure broadcast only provides liveness if the sender is correct, faulty processes might not be able to perform any transfers. However, due to secure broadcast's delivery properties, the correct processes will always have a consistent view of
the system state. Every transfer operation only entails a single invocation of secure broadcast, and our algorithm does not send any additional messages. Our algorithm inherits its complexity from the underlying secure broadcast implementation, and there are plenty of such algorithms optimizing complexity metrics for various settings [10, 11, 21, 25, 35, 36, 45]. In practice, as shown by a preliminary deployment based on a naive quadratic secure broadcast implementation in a medium-sized system (up to 100 processes), our solution outperforms a consensus-based one by 1.5x to 6x in throughput and by up to 2x in latency.

The implementation can be further extended to solve the k-shared asset transfer problem. As we showed in Section 4, agreement among a subset of the processes is necessary in such a case. We associate each account (owned by up to k processes) with a Byzantine fault-tolerant state machine replication (BFT) service executed by the owners of that account. The BFT service assigns sequence numbers to transfers, which the processes then submit to an extended version of the above-mentioned transfer protocol. As long as the replicated state machine is safe and live, we guarantee that every invoked transfer operation eventually returns. If an account becomes compromised (i.e., the safety or liveness of the BFT is violated), only the corresponding account might lose liveness. In other words, outgoing transfers from the compromised account may not return, while safety and liveness of transfers from “healthy” accounts are always guaranteed. We describe this extension in more detail later (Section 6).

In the rest of this section, we give details on the Byzantine message passing model, adapt our asset-transfer object accordingly (Sec. 5.1) and present its broadcast-based implementation (Sec. 5.2).

## 5.1 Byzantine Message Passing Model

A process is Byzantine if it deviates from the algorithm
it is assigned, either by halting prematurely, in which case we say that the process is crashed, or by performing actions that are not prescribed by its algorithm, in which case we say that the process is malicious. Malicious processes can perform arbitrary actions, except for ones that involve subverting cryptographic primitives (e.g., inverting secure hash functions). A process is called faulty if it is either crashed or malicious. A process is correct if it is not faulty, and benign if it is not malicious. Note that every correct process is benign, but not necessarily vice versa.

We only require that the transfer system behaves correctly towards benign processes, regardless of the behavior of Byzantine ones. Informally, we require that no benign process can be a victim of a double-spending attack, i.e., every execution appears to benign processes as a correct sequential execution, respecting the original execution's real-time ordering.

For the sake of efficiency, in our algorithm, we slightly relax the last requirement, while still preventing double-spending. We require that successful transfer operations invoked by benign processes constitute a legal sequential history that preserves the real-time order. A read or a failed transfer operation invoked by a benign process p can be “outdated”: it can be based on a stale state of p's balance. Informally, one can view the system requirements as linearizability for successful transfers and sequential consistency for failed transfers and reads. One can argue that this relaxation incurs little impact on the system's utility, since all incoming transfers are eventually applied. As progress (liveness) guarantees, we require that every operation invoked by a correct process eventually completes.

Definition 1. Let E be any execution of an implementation and H be the corresponding history. Let ops(H) denote the set of operations in H
that were executed by correct processes in E. An asset-transfer object in message passing guarantees that each invocation issued by a correct process is followed by a matching response in H, and that there exists ¯H, a completion of H, such that:
(1) Let ¯Ht denote the sub-history of successful transfers of ¯H performed by correct processes and ≺t¯H be the subset of ≺¯H restricted to operations in ¯Ht. Then there exists a legal sequential history S such that (a) for every correct process p, ¯Ht|p = S|p and (b) ≺t¯H ⊆ ≺S.
(2) For every correct process p, there exists a legal sequential history Sp such that:
  • ops(¯H) ⊆ ops(Sp), and
  • Sp|p = ¯H|p.

Notice that property (2) implies that every update in H that affects the account of a correct process p is eventually included in p's “local” history and, therefore, will be reflected in reads and transfer operations subsequently performed by p.

## 5.2 Asset Transfer Implementation in Message Passing

Instead of consensus, we rely on a secure broadcast primitive that is strictly weaker than consensus and has a fully asynchronous implementation. It provides uniform reliable delivery despite Byzantine faults and so-called source order among delivered messages. The source order property, being even weaker than FIFO, guarantees that messages from the same source are delivered in the same order by all correct processes. More precisely, the secure broadcast primitive we use in our implementation has the following properties:
• Integrity: A benign process delivers a message m from a process p at most once and, if p is benign, only if p previously broadcast m.
• Agreement:
If processes p and q are correct and p delivers m, then q delivers m.
• Validity: If a correct process p broadcasts m, then p delivers m.
• Source order: If p and q are benign and both deliver m from r and m′ from r, then they do so in the same order.

Operation. To perform a transfer tx, a process p securely broadcasts a message with the transfer details: the arguments of the transfer operation (see Section 2.2) and some metadata. The metadata includes a per-process sequence number of tx and references to the dependencies of tx. The dependencies are transfers incoming to p that must be known to any process before applying tx. These dependencies impose a causal relation between transfers that must be respected when transfers are being applied. For example, suppose that process p makes a transfer tx to process q and q, after observing tx, performs another transfer tx′ to process r. q's broadcast message will contain tx′, a local sequence number, and a reference to tx. Any process (not only r) will only evaluate the validity of tx′ after having applied tx. This approach is similar to using vector clocks for implementing causal order among events.

To ensure the authenticity of operations, so that no process is able to debit another process's account, we assume that processes sign all their messages before broadcasting them. In practice, similar to Bitcoin and other transfer systems, every process p possesses a public-private key pair that allows only p to securely initiate transfers from its corresponding account. For simplicity of presentation, we omit this mechanism in the algorithm pseudocode.

Figure 4 describes the full algorithm implementing asset-transfer in a Byzantine-prone message passing system. Each process p
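The dependency rule in the tx/tx′ example can be illustrated with a small applier (a sketch with hypothetical names: each delivered transfer carries an id and the ids of the transfers it depends on, and a transfer is applied only once all of its dependencies have been applied):

```python
def apply_in_causal_order(delivered):
    """Apply transfers respecting their dependencies: a transfer stays
    pending until everything it references has been applied. Returns
    the order in which transfers were applied; transfers with missing
    dependencies remain unapplied."""
    applied, pending, order = set(), list(delivered), []
    progress = True
    while pending and progress:
        progress = False
        for t in list(pending):
            if all(dep in applied for dep in t['deps']):
                applied.add(t['id'])
                order.append(t['id'])
                pending.remove(t)
                progress = True
    return order

# tx2 (q -> r) depends on tx1 (p -> q); even if tx2 is delivered
# first, it is applied only after tx1:
print(apply_in_causal_order([
    {'id': 'tx2', 'deps': ['tx1']},
    {'id': 'tx1', 'deps': []},
]))  # ['tx1', 'tx2']
```

A transfer whose dependency never arrives simply stays pending, which mirrors the algorithm's behavior of deferring validation until dependencies are in the local history.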
maintains, for each process q, an integer seq[q] reflecting the number of transfers which process q initiated and which process p has validated and applied. Process p also maintains, for every process q, an integer rec[q] reflecting the number of transfers process q has initiated and process p has delivered (but not necessarily applied). Additionally, there is also a list hist[q] of transfers which involve process q. We say that a transfer operation involves a process q if that transfer is either outgoing or incoming on the account of q. Each process p maintains as well a local variable deps. This is the set of transfers incoming for p that p has applied since the last successful outgoing transfer. Finally, the set toValidate contains delivered transfers that are pending validation (i.e., have been delivered, but not yet validated).

To perform a transfer operation, process p first checks the balance of its own account, and if the balance is insufficient, returns false (line 3). Otherwise, process p broadcasts a message with this operation via the secure broadcast primitive (line 4). This message includes the three basic arguments of a transfer operation as well as seq[p] + 1 and the dependencies deps. Each correct process in the system eventually delivers this message via secure broadcast (line 8). Note that, given the assumption of no process executing more than

Local variables:
  seq[], initially seq[q] = 0, ∀q  {number of validated transfers outgoing from q}
  rec[], initially rec[q] = 0, ∀q  {number of delivered transfers from q}
  hist[], initially hist[q] = ∅, ∀q  {set of validated transfers involving q}
  deps, initially ∅  {set of last incoming transfers for the account of local process p}
  toValidate, initially ∅  {set of delivered (but not validated) transfers}

1   operation transfer(a, b, x) where μ(a) = {p}
2     if balance(a, hist[p] ∪ deps) < x then
3       return false
4     broadcast([(a, b, x, seq[p] + 1), deps])
5     deps = ∅

6   operation read(a)
7     return balance(a, hist[a] ∪ deps)

8   upon deliver(q, m)
9     let m be [(q, d, y, s), h]
10    if s = rec[q] + 1 then
11      rec[q] = rec[q] + 1
12      toValidate = toValidate ∪ {(q, m)}

13  upon (q, [t, h]) ∈ toValidate ∧ Valid(q, t, h)
14    let t be (q, d, y, s)
15    hist[q] = hist[q] ∪ h ∪ {t}
16    seq[q] = s
17    if d = p then
18      deps = deps ∪ {(q, d, y, s)}
19    if q = p then
20      return true

21  function Valid(q, t, h)
22    let t be (c, d, y, s)
23    return (q = c)
24      and (s = seq[q] + 1)
25      and (balance(c, hist[q]) ≥ y)
26      and (∀(a, b, x, r) ∈ h : (a, b, x, r) ∈ hist[a])

27  function balance(a, h)
28    return sum of incoming minus outgoing transfers for account a in h

Figure 4: Consensusless transfer system based on secure broadcast. Code for every process p.

one concurrent transfer, every process waits for delivery of its own message before initiating another broadcast. This effectively turns the source order property of secure broadcast into FIFO order. Upon delivery, process p checks this message for well-formedness (lines 9 and 10) and then adds it to the set of messages pending validation. We explain the validation procedure later.

Once a transfer passes validation (the predicate in line 13 is satisfied), process p applies this transfer to the local state. Applying a transfer means that process p adds this transfer and its dependencies to the history of the outgoing (line 15) account. If the transfer is incoming for the local process p, it is also added to deps, the set of current dependencies for p (line 18). If the transfer
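As an illustration, the Valid predicate and balance function of Figure 4 can be rendered in Python as follows (a sketch with illustrative names; transfers are tuples (c, d, y, s), histories are keyed by account, and the initial endowment of 10 is an assumption made for the example):

```python
def balance(a, h, initial=10):
    """Incoming minus outgoing transfers for account a in history h, on
    top of an assumed initial endowment (the 10 is illustrative)."""
    return (initial
            + sum(y for (_c, d, y, _s) in h if d == a)
            - sum(y for (c, _d, y, _s) in h if c == a))

def valid(q, t, h, seq, hist):
    """The four validation conditions, sketched: the issuer owns the
    debited account, sequence numbers are gapless, the balance stays
    non-negative, and every reported dependency was already validated
    (i.e., is in the history of its own source account)."""
    c, d, y, s = t
    return (q == c                                      # issuer owns c
            and s == seq[q] + 1                         # gapless seq
            and balance(c, hist[q]) >= y                # enough funds
            and all(dep in hist[dep[0]] for dep in h))  # deps known

hist = {'p': set(), 'q': {('q', 'p', 3, 1)}}
seq = {'q': 1}
print(valid('q', ('q', 'r', 2, 2), [], seq, hist))  # True
print(valid('q', ('q', 'r', 2, 3), [], seq, hist))  # False: seq gap
```

In the first call, q's balance is 10 − 3 = 7, which covers the amount 2; the second call fails only because sequence number 3 would skip number 2.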
is outgoing for p, i.e., it is the currently pending transfer operation invoked by p, then the response true is returned (line 20).

To perform a read(a) operation for account a, process p simply computes the balance of this account based on the local history hist[a] (line 28). Before applying a transfer op from some process q, process p validates op via the Valid function (lines 21–26). To be valid, op must satisfy four conditions. The first condition is that process q (the issuer of transfer op) must be the owner of the outgoing account for op (line 23). Second, any preceding transfers that process q issued must have been validated (line 24). Third, the balance of account q must not drop below zero (line 25). Finally, the reported dependencies of op (encoded in h of line 26) must have been validated and exist in hist[q].

Lemma 3. In any infinite execution of the algorithm (Figure 4), every operation performed by a correct process eventually completes.

Proof. A transfer operation that fails or a read operation invoked by a correct process returns immediately (lines 3 and 7, respectively). Consider a transfer operation T invoked by a correct process p that succeeds (i.e., passes the check in line 2), so p broadcasts a message with the transfer details using secure broadcast (line 4). By the validity property of secure broadcast, p eventually delivers the message (via the secure broadcast callback, line 8) and adds it to the toValidate set. By the algorithm, this message includes a set deps of operations (called h, line 9) that involve p's account. This set includes transfers that process p delivered and validated after issuing the prior successful outgoing transfer (or since system initialization if there
is no such transfer) but before issuing T (lines 4 and 5). As process p is correct, it operates on its own account, respects the sequence numbers, and issues a transfer only if it has enough balance on the account. Thus, when it is delivered by p, T must satisfy the first three conditions of the Valid predicate (lines 23–25). Moreover, by the algorithm, all dependencies (labeled h in function Valid) included in T are in the history hist[p] and, thus, the fourth validation condition (line 26) also holds. Thus, p eventually validates T and completes the operation by returning true in line 20.

Theorem 3. The algorithm in Figure 4 implements an asset-transfer object type.

Proof. Fix an execution E of the algorithm and let H be the corresponding history. Let V denote the set of all messages that were delivered (line 8) and validated (line 23) at correct processes in E. Every message m = [(q, d, y, s), h] ∈ V is put in hist[q] (line 15). We define an order ⋖ ⊆ V × V as follows. For m = [(q, d, y, s), h] ∈ V and m′ = [(r, d′, y′, s′), h′] ∈ V, we have m ⋖ m′ if and only if one of the following conditions holds:
• q = r and s < s′,
• (q, d, y, s) ∈ h′, or
• there exists m′′ ∈ V such that m ⋖ m′′ and m′′ ⋖ m′.

By the source order property of secure broadcast (see Section 5.2), correct processes p and r deliver messages from any process q in the same order. By the algorithm in Figure 4, a message from q with a sequence number i is added by
a correct process to the toValidate set only if the previous message from q added to toValidate had sequence number i − 1 (line 10). Furthermore, a message m = [(q, d, y, s), h] is validated at a correct process only if all messages in h have been previously validated (line 26). Therefore, ⋖ is acyclic and thus can be extended to a total order. Let S be the sequential history constructed from any such total order on messages in V in which every message m = [(q, d, y, s), h] is replaced with the invocation-response pair transfer(q, d, y); true.

By construction, every operation transfer(q, d, y) in S is preceded by a sequence of transfers that ensure that the balance of q does not drop below y (line 25). In particular, S includes all outgoing transfers from the account of q performed previously by q itself. Additionally, S may order some incoming transfers to q that did not appear in hist[q] before the corresponding (q, d, y, s) was added to it. But these “unaccounted” operations may only increase the balance of q and, thus, it is indeed legal to return true. By construction, for each correct process p, S respects the order of successful transfers issued by p. Thus, the subsequence of successful transfers in H “looks” linearizable to the correct processes: H, restricted to successful transfers witnessed by the correct processes, is consistent with a legal sequential history S.

Let p be a correct process in E. Now let Vp denote the set of all messages that were delivered (line 8) and validated (line 23) at p in E. Let ⋖p be the subset of ⋖ restricted to the elements in Vp. Obviously, ⋖p is cycle-free and we can again extend
it to a total order. Let Sp be the sequential history built in the same way as S above. Similarly, we can see that Sp is legal and, by construction, consistent with the local history of all operations of p (including reads and failed transfers). By Lemma 3, every operation invoked by a correct process eventually completes. Thus, E indeed satisfies the properties of an asset-transfer object type.

## 6 k-SHARED ASSET TRANSFER IN MESSAGE PASSING

Our message-passing asset-transfer implementation can be naturally extended to the k-shared case, where some accounts are owned by up to k processes. As we showed in Section 4, a purely asynchronous implementation of k-shared asset-transfer does not exist, even in the benign shared-memory environment.

k-shared BFT service. To circumvent this impossibility, we assume that every account is associated with a Byzantine fault-tolerant state-machine replication service (BFT) that is used by the account's owners to order their outgoing transfers. More precisely, the transfers issued by the owners are assigned monotonically increasing sequence numbers. The service can be implemented by the owners themselves, acting both as clients, submitting requests, and replicas, reaching agreement on the order in which the requests must be served. As long as more than two thirds of the owners are correct, the service is safe; in particular, no sequence number is assigned to more than one transfer. Moreover, under the condition that the owners can eventually communicate within a bounded message delay, every request submitted by a correct owner is guaranteed to eventually be assigned a sequence number. One can argue that it is much more likely that this assumption of eventual synchrony holds for a bounded set of owners, rather than
for the whole set of system participants. Furthermore, the communication complexity of such an implementation is polynomial in k and not in N, the number of processes.

Account order in secure broadcast. Consider now the case where the threshold of one third of Byzantine owners is exceeded, in which the account may become blocked or, even worse, compromised. In this case, different owners may be able to issue two different transfers associated with the same sequence number. This issue can be mitigated by a slight modification of the classical secure broadcast algorithm. In addition to the Integrity, Validity and Agreement properties of secure broadcast, the modified algorithm implements the property of account order, generalizing the source order property (Section 5.2). Assume that each broadcast message is equipped with a sequence number (generated by the BFT service, as we will see below).
• Account order: If a benign process p delivers messages m (with sequence number s) and m′ (with sequence number s′) such that m and m′ are associated with the same account and s < s′, then p delivers m before m′.

Informally, the implementation works as follows. The sender sends the message (containing the account reference and the sequence number) it wants to broadcast to all processes and waits until it receives acknowledgements from a quorum of more than two thirds of the processes. A message with a sequence number s associated with an account a is only acknowledged by a benign process if the last message associated with a that it delivered had sequence number s − 1. Once a quorum is collected, the sender sends the message, equipped with the signed quorum, to all processes and delivers the message. This way, the benign processes deliver the messages associated with the same account in the same
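The acknowledgement rule and the quorum condition can be sketched as follows (illustrative Python; should_ack and quorum are our names for the two local checks, not part of the protocol text):

```python
def should_ack(last_delivered, account, s):
    """A benign process acknowledges a broadcast message with sequence
    number s on an account only if the last message it delivered for
    that account carried sequence number s - 1 (gapless per-account
    delivery). last_delivered maps each account to the highest
    delivered sequence number, 0 if none."""
    return s == last_delivered.get(account, 0) + 1

def quorum(acks, n):
    """The sender completes the broadcast once more than two thirds of
    the n processes have acknowledged the message."""
    return acks > 2 * n / 3

print(should_ack({'a': 4}, 'a', 5), quorum(7, 10))  # True True
```

Because any two quorums of more than two thirds intersect in a benign process, two conflicting messages carrying the same sequence number for the same account can never both gather a quorum, which is what rules out double spending even for compromised accounts.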
If the owners of an account send conflicting messages for the same sequence number, the account may block. However, and most importantly, even a compromised account is always prevented from double spending. Liveness of operations on a compromised account is not guaranteed, but safety and liveness of other operations remain unaffected.

Putting it all together. The resulting k-shared asset-transfer system is a composition of a collection of BFT services (one per account), the modified secure broadcast protocol (providing the account-order property), and a slightly modified version of the protocol in Figure 4. To issue a transfer operation t on an account a it owns, a process p first submits t to the associated BFT service to get a sequence number. Assuming that the account is not compromised and the service is consistent, the transfer receives a unique sequence number s. Note that the decided tuple (a, t, s) should be signed by a quorum of owners: this will be used by the other processes in the system to verify that the sequence number has indeed been agreed upon by the owners of a. The process then executes the protocol in Figure 4, with the only modification that the sequence number seq is not computed locally but adopted from the BFT service. Intuitively, as the transfers associated with a given account are processed by the benign processes in the same order, the resulting protocol ensures that the history of successful transfers is linearizable. On the liveness side, the protocol ensures that every transfer on a non-compromised account is guaranteed to complete.

## 7 RELATED WORK

Many systems address the problem of asset transfers, be they for a permissioned (private, with a trusted external access control mechanism) [4, 26, 31] or permissionless (public, prone to Sybil attacks) setting [2, 16, 22, 33, 38,
44]. Decentralized systems for the public setting are open to the world. To prevent malicious parties from overtaking the system, these systems rely on Sybil-proof techniques, e.g., proof-of-work or proof-of-stake. The above-mentioned solutions, whether for the permissionless or the permissioned environment, seek to solve consensus. They must inevitably rely on synchrony assumptions or randomization. By sidestepping consensus, we can provide a deterministic and asynchronous implementation. It is worth noting that many of those solutions allow for more than just transfers, and support richer operations on the system state, so-called smart contracts. Our paper focuses on the original asset-transfer problem, as defined by Nakamoto, and we do not address smart contracts, for certain forms of which consensus is indeed necessary. However, our approach allows for arbitrary operations, if those operations affect groups of the participants that can solve consensus among themselves. Potential safety or liveness violations of those operations (in case such a group gets compromised) are confined to the group and do not affect the rest of the system. In the blockchain ecosystem, a lot of work has been devoted to avoiding a totally ordered chain of transfers. The idea is to replace the totally ordered linear structure of a blockchain with a directed acyclic graph (DAG) for structuring the transfers in the system. Notable systems in this spirit include Byteball, Vegvisir, Corda, Nano, and the GHOST protocol. Even if these systems use a DAG to replace the classic blockchain, they still employ consensus. We can also use a DAG to characterize the relation between transfers, but we do not resort to solving consensus to build the DAG, nor do we use the DAG to solve consensus. More precisely, we can regard each account as having an individual history. Each such
history is managed by the corresponding account owner without depending on a global view of the system. Histories are loosely coupled through a causality relation established by dependencies among transfers. The important insight that an asynchronous broadcast-style abstraction suffices for transfers appears in the literature as early as 2002, due to Pedone and Schiper. Duan et al. introduce efficient Byzantine fault-tolerant protocols for storage and also build on this insight. So does recent work by Gupta on financial transfers, which seems closest to ours; the proposed algorithm is based on similar principles as some implementations of secure broadcast [35, 36]. To the best of our knowledge, however, we are the first to formally define the asset-transfer problem as a shared object type, study its consensus number, and propose algorithms building on top of standard abstractions that are amenable to a real deployment in cryptocurrencies.

> The Consensus Number of a Cryptocurrency

## REFERENCES

- Abraham, I., Gueta, G., Malkhi, D., Alvisi, L., Kotla, R., and Martin, J.-P. Revisiting fast practical Byzantine fault tolerance, 2017.
- Abraham, I., Malkhi, D., Nayak, K., Ren, L., and Spiegelman, A. Solida: A blockchain protocol based on reconfigurable Byzantine consensus, 2016.
- Afek, Y., Attiya, H., Dolev, D., Gafni, E., Merritt, M., and Shavit, N. Atomic snapshots of shared memory. JACM 40, 4 (1993), 873–890.
- Androulaki, E., Barger, A., Bortnikov, V., Cachin, C., Christidis, K., De Caro, A., Enyeart, D., Ferris, C., Laventman, G., Manevich, Y., Muralidharan, S., Murthy, C., Nguyen, B., Sethi, M., Singh, G., Smith, K., Sorniotti, A., Stathakopoulou, C., Vukolić, M., Cocco, S. W., and Yellick, J. Hyperledger Fabric: A distributed operating system for permissioned blockchains. In Proceedings of the Thirteenth EuroSys Conference (New York, NY, USA, 2018), EuroSys '18, ACM, pp. 30:1–30:15.
- Antoniadis,
K., Guerraoui, R., Malkhi, D., and Seredinschi, D.-A. State Machine Replication Is More Expensive Than Consensus. In 32nd International Symposium on Distributed Computing (DISC 2018) (Dagstuhl, Germany, 2018), U. Schmid and J. Widder, Eds., vol. 121 of Leibniz International Proceedings in Informatics (LIPIcs), Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, pp. 7:1–7:18.
- Attiya, H., and Welch, J. L. Sequential consistency versus linearizability. ACM TOCS 12, 2 (1994), 91–122.
- Bentov, I., Gabizon, A., and Mizrahi, A. Cryptocurrencies without proof of work. In Financial Cryptography and Data Security (Berlin, Heidelberg, 2016), J. Clark, S. Meiklejohn, P. Y. Ryan, D. Wallach, M. Brenner, and K. Rohloff, Eds., Springer Berlin Heidelberg, pp. 142–157.
- Berman, P., Garay, J. A., and Perry, K. J. Towards optimal distributed consensus. In 30th Annual Symposium on Foundations of Computer Science (FOCS) (Research Triangle Park, NC, USA, Oct 1989), IEEE, pp. 410–415.
- Bonneau, J., Miller, A., Clark, J., Narayanan, A., Kroll, J. A., and Felten, E. W. SoK: Research Perspectives and Challenges for Bitcoin and Cryptocurrencies, 2015.
- Bracha, G., and Toueg, S. Asynchronous Consensus and Broadcast Protocols. JACM 32, 4 (1985), 824–840.
- Cachin, C., Kursawe, K., Petzold, F., and Shoup, V. Secure and efficient asynchronous broadcast protocols. In Advances in Cryptology — CRYPTO 2001 (Berlin, Heidelberg, 2001), J. Kilian, Ed., Springer Berlin Heidelberg, pp. 524–541.
- Cachin, C., and Vukolić, M. Blockchain consensus protocols in the wild, 2017.
- Castro, M., and Liskov, B. Practical Byzantine fault tolerance and proactive recovery. ACM Trans. Comput. Syst. 20, 4 (Nov. 2002), 398–461.
- Churyumov, A. Byteball: A decentralized system for storage and transfer of value, 2016.
- Clement, A., Wong, E. L., Alvisi, L., Dahlin, M., and Marchetti, M. Making Byzantine Fault Tolerant Systems Tolerate Byzantine Faults. In NSDI (Berkeley, CA, USA, 2009), USENIX
Association, pp. 153–168.
- Decker, C., Seidel, J., and Wattenhofer, R. Bitcoin meets strong consistency. In Proceedings of the 17th International Conference on Distributed Computing and Networking (New York, NY, USA, 2016), ICDCN '16, ACM, pp. 13:1–13:10.
- Duan, S., Reiter, M. K., and Zhang, H. BEAT: Asynchronous BFT made practical. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (New York, NY, USA, 2018), CCS '18, ACM, pp. 2028–2041.
- Eyal, I., Gencer, A. E., Sirer, E. G., and Van Renesse, R. Bitcoin-NG: A Scalable Blockchain Protocol, 2016.
- Fischer, M. J., Lynch, N. A., and Paterson, M. S. Impossibility of distributed consensus with one faulty process. JACM 32, 2 (Apr. 1985), 374–382.
- Garay, J., Kiayias, A., and Leonardos, N. The Bitcoin backbone protocol: Analysis and applications. In Advances in Cryptology - EUROCRYPT 2015 (Berlin, Heidelberg, 2015), E. Oswald and M. Fischlin, Eds., Springer Berlin Heidelberg, pp. 281–310.
- Garay, J. A., Katz, J., Kumaresan, R., and Zhou, H.-S. Adaptively secure broadcast, revisited. In Proceedings of the 30th Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing (New York, NY, USA, 2011), PODC '11, ACM, pp. 179–186.
- Gilad, Y., Hemo, R., Micali, S., Vlachos, G., and Zeldovich, N. Algorand: Scaling Byzantine agreements for cryptocurrencies. In Proceedings of the 26th Symposium on Operating Systems Principles (New York, NY, USA, 2017), SOSP '17, ACM, pp. 51–68.
- Guerraoui, R., Pavlovic, M., and Seredinschi, D.-A. Blockchain protocols: The adversary is in the details. Symposium on Foundations and Applications of Blockchain, 2018.
- Gupta, S. A Non-Consensus Based Decentralized Financial Transaction Processing Model with Support for Efficient Auditing. Master's thesis, Arizona State University, USA, 2016.
- Hadzilacos, V., and Toueg, S. Fault-tolerant broadcasts and related problems. In Distributed Systems, S. J. Mullender, Ed. Addison-Wesley,
1993, ch. 5, pp. 97–145.
- Hearn, M. Corda: A distributed ledger. Corda Technical White Paper, 2016.
- Herlihy, M. Wait-free synchronization. TOPLAS 13, 1 (1991), 123–149.
- Herlihy, M. P., and Wing, J. M. Linearizability: A correctness condition for concurrent objects. ACM Trans. Program. Lang. Syst. 12, 3 (July 1990), 463–492.
- Fidge, C. J. Timestamps in message-passing systems that preserve partial ordering. Proceedings of the 11th Australian Computer Science Conference 10, 1 (02 1988), 56–66.
- Jayanti, P., and Toueg, S. Some results on the impossibility, universality, and decidability of consensus. In Distributed Algorithms (Berlin, Heidelberg, 1992), A. Segall and S. Zaks, Eds., Springer Berlin Heidelberg, pp. 69–84.
- Karlsson, K., Jiang, W., Wicker, S., Adams, D., Ma, E., van Renesse, R., and Weatherspoon, H. Vegvisir: A Partition-Tolerant Blockchain for the Internet-of-Things, 2018.
- Kogias, E. K., Jovanovic, P., Gailly, N., Khoffi, I., Gasser, L., and Ford, B. Enhancing Bitcoin security and performance with strong consistency via collective signing. USENIX Security, 2016.
- Kokoris-Kogias, E., Jovanovic, P., Gasser, L., Gailly, N., Syta, E., and Ford, B. OmniLedger: A secure, scale-out, decentralized ledger via sharding. IEEE S&P, 2018.
- LeMahieu, C. Nano: A feeless distributed cryptocurrency network. Nano [Online resource]: org/en/whitepaper (accessed 18.01.2019), 2018.
- Malkhi, D., Merritt, M., and Rodeh, O. Secure Reliable Multicast Protocols in a WAN. ICDCS, 1997.
- Malkhi, D., and Reiter, M. K. A high-throughput secure reliable multicast protocol. Journal of Computer Security 5, 2 (1997), 113–128.
- Mazieres, D. The Stellar consensus protocol: A federated model for internet-level consensus. Stellar Development Foundation, 2015.
- Nakamoto, S. Bitcoin: A peer-to-peer electronic cash system, 2008.
- Pedone, F., and Schiper, A. Handling message semantics with generic broadcast protocols. Distributed Computing 15, 2 (2002), 97–107.
- Rapoport, P.,
Leal, R., Griffin, P., and Sculley, W. The Ripple Protocol, 2014.
- Sompolinsky, Y., and Zohar, A. Accelerating Bitcoin's transaction processing: fast money grows on trees, not chains. IACR Cryptology ePrint Archive, 2013:881, 2013.
- Sousa, J., Bessani, A., and Vukolic, M. A Byzantine fault-tolerant ordering service for the Hyperledger Fabric blockchain platform. IEEE DSN, 2018.
- Szabo, N. Formalizing and securing relationships on public networks. First Monday 2(9), 1997.
- Team Rocket. Snowflake to Avalanche: A Novel Metastable Consensus Protocol Family for Cryptocurrencies. White Paper, 2018. Revision: 05/16/2018 21:51:26 UTC.
- Toueg, S. Randomized Byzantine agreements. In Proceedings of the Third Annual ACM Symposium on Principles of Distributed Computing (New York, NY, USA, 1984), PODC '84, ACM, pp. 163–178.
- Vukolić, M. The Quest for Scalable Blockchain Fabric: Proof-of-work vs. BFT Replication. International Workshop on Open Problems in Network Security, 2015.
- Wood, G. Ethereum: A secure decentralized generalized transaction ledger. White paper, 2015.
Title: 06.Consensus

# Consensus and Reliable Broadcast

# Broadcast

If a process sends a message m, then every process eventually delivers m.

How can we adapt the spec for an environment where processes can fail? And what does "fail" mean?

# A hierarchy of failure models

Benign failures: Fail-stop, Crash, Send Omission, Receive Omission, General Omission. Beyond benign: Arbitrary failures with message authentication, Arbitrary (Byzantine) failures.

# Reliable Broadcast

- Validity: If the sender is correct and broadcasts a message m, then all correct processes eventually deliver m.
- Agreement: If a correct process delivers a message m, then all correct processes eventually deliver m.
- Integrity: Every correct process delivers at most one message, and if it delivers m, then some process must have broadcast m.

# Terminating Reliable Broadcast

- Validity: If the sender is correct and broadcasts a message m, then all correct processes eventually deliver m.
- Agreement: If a correct process delivers a message m, then all correct processes eventually deliver m.
- Integrity: Every correct process delivers at most one message, and if it delivers m ≠ SF, then some process must have broadcast m.
- Termination: Every correct process eventually delivers some message.

# Consensus

- Validity: If all processes that propose a value propose v, then all correct processes eventually decide v.
- Agreement: If a correct process decides v, then all correct processes eventually decide v.
- Integrity: Every correct process decides at most one value, and if it decides v, then some process must have proposed v.
- Termination: Every correct
process eventually decides some value.

# Properties of send(m) and receive(m)

Benign failures:

- Validity: If p sends m to q, and p, q, and the link between them are correct, then q eventually receives m.
- Uniform* Integrity: For any message m, q receives m at most once from p, and only if p sent m to q.

(*) A property is uniform if it applies to both correct and faulty processes.

Arbitrary failures:

- Integrity: For any message m, if p and q are correct, then q receives m at most once from p, and only if p sent m to q.

# Questions, Questions…

- Are these problems solvable at all?
- Can they be solved independent of the failure model?
- Does solvability depend on the ratio between faulty and correct processes?
- Does solvability depend on assumptions about the reliability of the network?
- Are the problems solvable in both synchronous and asynchronous systems?
- If a solution exists, how expensive is it?

# Plan

Synchronous systems:
- Consensus for synchronous systems with crash failures
- Lower bound on the number of rounds
- Reliable Broadcast for arbitrary failures with message authentication
- Lower bound on the ratio of faulty processes for Consensus with arbitrary failures
- Reliable Broadcast for arbitrary failures

Asynchronous systems:
- Impossibility of Consensus for crash failures
- Failure detectors
- PAXOS

# Model

Synchronous message passing:
- Execution is a sequence of rounds.
- In each round every process takes a step: it sends messages to neighbors, receives messages sent in that round, and changes its state.
- The network is fully connected (an n-clique).
- No communication failures.

# A simple Consensus algorithm

Process pi:

Initially V = {vi}

To execute propose(vi):
round 1:
1: send {vi} to all
decide(x) occurs as follows:
2: for all j, 0 ≤ j ≤ n−1, j ≠ i do
3: receive Sj from pj
4: V := V ∪ Sj
5: decide min(V)

# An execution

[Diagram: p1, p2, p3, p4 propose v1, v2, v3, v4 and exchange them in round 1; p2's value v2 reaches only some processes.] Suppose at the end of round 1, p3 has received v1 and v4 but not v2, and v1 = v3 = v4. Can p3 decide? Not yet: another process may have received v2 and can relay it in round 2, and v2 may be the minimum.

# Echoing values
A process that receives a proposal in round 1 relays it to others during round 2.

Suppose p3 hasn't heard from p2 at the end of round 2. Can p3 decide?

[Diagram: rounds 1 and 2 among p1, p2, p3, p4.]

# What is going on

A correct process p∗ has not received all proposals by the end of round i. Can p∗ decide? Another process may have received the missing proposal at the end of round i and be ready to relay it in round i+1.

# Dangerous Chains

A dangerous chain is a sequence of processes p0, p1, p2, ..., pi−1, p∗, where p0 sends in round 1, p1 in round 2, and so on through round i; the last process in the chain (p∗) is correct, all others are faulty.

# Living dangerously

How many rounds can a dangerous chain span? With f faulty processes there are at most f+1 nodes in the chain, so it spans at most f rounds. It is safe to decide by the end of round f+1!

# The Algorithm

Code for process pi:

Initially V = {vi}

To execute propose(vi):
round k, 1 ≤ k ≤ f+1:
1: send {v ∈ V : pi has not already sent v} to all
2: for all j, 0 ≤ j ≤ n−1, j ≠ i do
3: receive Sj from pj
4: V := V ∪ Sj

decide(x) occurs as follows:
5: if k = f+1 then
6: decide min(V)
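The f+1-round algorithm above can be exercised with a toy round-by-round simulation. This is our sketch, not the lecture's code; a crash is modeled as reaching only a subset of peers in the crash round and staying silent afterwards.

```python
# Toy simulation of the f+1-round flooding consensus above.
# crash_plan maps a process id to (crash_round, peers_still_reached).

def flood_consensus(inputs, f, crash_plan=None):
    crash_plan = crash_plan or {}
    n = len(inputs)
    V = [{v} for v in inputs]          # V_i = {v_i} initially
    sent = [set() for _ in range(n)]   # values each process already sent
    for k in range(1, f + 2):          # rounds 1 .. f+1
        msgs = [set() for _ in range(n)]
        for i in range(n):
            crash = crash_plan.get(i)
            if crash and k > crash[0]:
                continue               # crashed in an earlier round: silent
            new = V[i] - sent[i]       # send only values not sent before
            sent[i] |= new
            receivers = crash[1] if (crash and k == crash[0]) else range(n)
            for j in receivers:
                msgs[j] |= new
        for i in range(n):
            V[i] |= msgs[i]
    correct = [i for i in range(n) if i not in crash_plan]
    return {i: min(V[i]) for i in correct}

# p0 proposes 1 and crashes in round 1 after reaching only p1; with f = 1,
# p1's echo in round 2 lets every correct process learn 1 and decide it.
decisions = flood_consensus([1, 2, 3, 4], f=1, crash_plan={0: (1, {1})})
assert set(decisions.values()) == {1}
```

The crash scenario is exactly a length-one dangerous chain (faulty p0, correct p1), and the extra round absorbs it, matching the f+1 bound argued above.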
# Termination and Integrity

Termination: Every correct process reaches round
f+1 and decides on min(V), which is well defined.

Integrity:
- At most one value: there is one decide, and min(V) is unique.
- Only if it was proposed: to be decided upon, a value must be in V at round f+1.
  - If value = vi, then it is proposed in round 1.
  - Else, suppose the value was received in round k. By induction on k:
    - k = 1: by Uniform Integrity of the underlying send and receive, it must have been sent in round 1; by the protocol, and because only crash failures occur, it must have been proposed.
    - Induction hypothesis: all values received up to round k = j have been proposed.
    - k = j+1: the value was sent in round j+1 (Uniform Integrity of send and the synchronous model); it must have been part of the sender's V at the end of round j; by the protocol, it must have been received by the sender by the end of round j; by the induction hypothesis, it must have been proposed.
# Validity

Suppose every process proposes v∗. Since only crash failures occur, only v∗ can be sent. By Uniform Integrity of send and receive, only v∗ can be received. By the protocol, V = {v∗}, so min(V) = v∗ and decide(v∗) follows.

# Agreement

Lemma 1: For any r ≥ 1, if a process p receives a value v in round r, then there exists a sequence of processes p0, p1, ..., pr such that pr = p, p0 is v's proponent, and in each round k, 1 ≤ k ≤ r, pk−1 sends v and pk receives it. Furthermore, all processes in the sequence are distinct.

Proof: By induction on the length of the sequence.
# Agreement

Lemma 2: In every execution, at the end of round f+1, Vi = Vj for all correct processes pi and pj.

Agreement follows from Lemma 2, since min is a deterministic function.

Proof:
- Show that if a correct p has x in its V at the end of round f+1, then every correct process has x in its V at the end of round f+1.
- Let r be the earliest round in which x is added to the V of a correct process. Let that process be p∗.
- If r ≤ f, then p∗ sends x in round r+1 ≤ f+1; every correct process receives x and adds x to its V in round r+1.
- What if r = f+1?
  - By Lemma 1, there exists a sequence of distinct processes p0, ..., pf+1 = p∗.
  - Consider processes p0, ..., pf: these are f+1 processes, of which only f can be faulty.
  - Hence one of p0, ..., pf is correct, and adds x to its V before p∗ does it in round f+1. CONTRADICTION!
# Terminating Reliable Broadcast

- Validity: If the sender is correct and broadcasts a message m, then all correct processes eventually deliver m.
- Agreement: If a correct process delivers a message m, then all correct processes eventually deliver m.
- Integrity: Every correct process delivers at most one message, and if it delivers m ≠ SF, then some process must have broadcast m.
- Termination: Every correct process eventually delivers some message.

# TRB for benign failures

Sender in round 1:
1: send m to all

Process p in round k, 1 ≤ k ≤ f+1:
1: if delivered m in round k-1 then
2: if p ≠ sender then
3: send m to all
4: halt
5: receive round k messages
6: if received m then
7: deliver(m)
8: if k = f+1 then halt
9: else if k = f+1
10: deliver(SF)
11: halt

Terminates in f+1 rounds. How can we do better? Find a protocol whose round complexity is proportional to t, the number of failures that actually occurred, rather than to f, the maximum number of failures that may occur.

# Early stopping: the idea

Suppose processes can detect the set of processes that have failed by the end of round i. Call that set faulty(p, i). If |faulty(p, i)| < i, there can be no active dangerous chains, and p can safely deliver SF.

# Early Stopping: The Protocol

Let faulty(p, k) be the set of processes that failed to send a message to p in some round 1, ..., k.

1: if p = sender then value := m else value := ?

Process p in round k, 1 ≤ k ≤ f+1:
2: send value to all
3: if delivered in round k−1 then halt
4: receive round k values from all
5: faulty(p, k) := faulty(p, k−1) ∪ {q | p received no value from q in round k}
6: if received value v ≠ ? then
7: value := v
8: deliver value
9: if p = sender then value := ?
10: else if k = f+1 or |faulty(p, k)| < k then
11: value := SF
12: deliver value
13: if k = f+1 then halt
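The early-stopping test on line 10 is just a cardinality check; here is a minimal sketch of it (ours, with the hypothetical helper name `can_deliver_sf`).

```python
# Sketch of the early-stopping test from the protocol above: p can safely
# deliver SF in round i once fewer than i failures have been observed,
# because a dangerous chain active through round i needs i distinct
# faulty processes. Illustrative only.

def can_deliver_sf(faulty_observed, i):
    return len(faulty_observed) < i

assert not can_deliver_sf({"p1"}, 1)  # 1 failure seen by round 1: a chain may exist
assert can_deliver_sf({"p1"}, 2)      # still only 1 failure by round 2: no chain
```

In a failure-free run the test already passes in round 1 with zero observed failures, which is where the round complexity proportional to t, rather than f, comes from.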
# Termination

If in any round a process receives a value, then it delivers the value in that round. If a process has received only "?" for f+1 rounds, then it delivers SF in round f+1.
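The f+1-round TRB above can be illustrated with a deliberately simplified simulation (our sketch, not the lecture's code; relaying is collapsed to "if any non-crashed process has delivered m, everyone receives m in the next round").

```python
# Toy simulation of the f+1-round TRB for benign failures.
# SF is the "sender faulty" value. A faulty sender is modeled by
# reaching only a subset of processes in round 1.
SF = "SF"

def trb(n, f, m, reached_in_round1, crashed=frozenset()):
    # delivered[i] holds what process i delivered, or None so far
    delivered = [m if i in reached_in_round1 else None for i in range(n)]
    for k in range(2, f + 2):          # rounds 2 .. f+1: relay rounds
        relayers = [i for i in range(n)
                    if delivered[i] == m and i not in crashed]
        if relayers:                   # a non-crashed holder relays m to all
            delivered = [m] * n
    # round f+1: anyone still without m delivers SF
    return [d if d is not None else SF for d in delivered]

# Sender p0 crashes in round 1 after reaching only p1; with f = 1,
# p1 relays m in round 2, so every correct process delivers m.
out = trb(n=4, f=1, m="tx", reached_in_round1={1}, crashed={0})
assert out[1] == out[2] == out[3] == "tx"
```

If the faulty sender reaches nobody, no relayer ever exists and every process falls through to SF, which is exactly the Termination case just described.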
# Validity

If the sender is correct, then it sends m to all in round 1. By Validity of the underlying send and receive, every correct process will receive m by the end of round 1. By the protocol, every correct process will deliver m by the end of round 1.
# Agreement - 1

Lemma 1: For any r ≥ 1, if a process p delivers m ≠ SF in round r, then there exists a sequence of processes p0, p1, …, pr such that p0 = sender, pr = p, and in each round k, 1 ≤ k ≤ r, pk−1 sent m and pk received it. Furthermore, all processes in the sequence are distinct, unless r = 1 and p0 = p1 = sender.

Lemma 2: For any r ≥ 1, if a process p sets value to SF in round r, then there exist some j ≤ r and a sequence of distinct processes qj, qj+1, …, qr = p such that qj only receives "?" in rounds 1 to j, |faulty(qj, j)| < j, and in each round k, j+1 ≤ k ≤ r, qk−1 sends SF to qk and qk receives it.
# Agreement - 2

Lemma 3: It is impossible for two processes p and q, not necessarily correct or distinct, to set value in the same round r to m and SF, respectively.

Proof: By contradiction. Suppose p sets value = m and q sets value = SF. By Lemmas 1 and 2, there exist sequences p0, …, pr and qj, …, qr with the appropriate characteristics. Since qj did not
receive m from process pk−1 in round k, for any 1 ≤ k ≤ j, qj must conclude that p0, …, pj−1 are all faulty processes. But then |faulty(qj, j)| ≥ j. CONTRADICTION.

# Agreement - 3
Proof: If no correct process ever receives m, then every correct process delivers SF in round f+1. Otherwise, let r be the earliest round in which a correct process delivers value ≠ SF. By Lemma 3, no (correct) process can set value differently in round r. If r ≤ f, then in round r+1 that correct process sends its value to all, and every correct process receives and delivers the value in round r+1. If r = f+1, then by Lemma 1 there exists a sequence of distinct processes p0, …, pf+1 = pr. Consider p0, …, pf: these are f+1 processes and only f are faulty, so one of p0, …, pf is correct; let it be pc. To send v in round c+1, pc must have set its value to v and delivered v in round c, with c < r = f+1. CONTRADICTION.
# Integrity

At most one: failures are benign, and a process executes at most one deliver event before halting. If m ≠ SF, then m is delivered only if it was broadcast: from Lemma 1 in the proof of Agreement.
Let α|pi denote the restriction of execution α to the steps of process pi.

# Similarity
Definition: Let α1 and α2 be two executions of consensus and let pi be a correct process in both α1 and α2. α1 is similar to α2 with respect to pi, denoted α1 ∼pi α2, if α1|pi = α2|pi.

Note: If α1 ∼pi α2, then pi decides the same value in both executions.

Lemma: If α1 ∼pi α2 and pi is correct, then dec(α1) = dec(α2).

The transitive closure of ∼ is denoted ≈. We say that α1 ≈ α2 if there exist executions β1, β2, …, βk+1 such that α1 = β1 ∼pi1 β2 ∼pi2 ⋯ ∼pik βk+1 = α2.

Lemma: If α1 ≈ α2, then dec(α1) = dec(α2).
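The definition and its transitive closure can be made concrete with a toy model in which an execution is just a mapping from processes to local histories. The names (`similar`, `similarity_chain`) and the example histories are hypothetical illustrations, not part of the slides.

```python
from collections import deque

def similar(a1, a2, pi):
    # a1 ~_pi a2 : the two executions are indistinguishable to process pi
    return a1[pi] == a2[pi]

def similarity_chain(executions, start, end, correct):
    # start ≈ end : breadth-first search through "similar with respect to
    # some correct process", i.e. the transitive closure from the slides
    seen, queue = {start}, deque([start])
    while queue:
        name = queue.popleft()
        if name == end:
            return True
        for other in executions:
            if other not in seen and any(
                    similar(executions[name], executions[other], p)
                    for p in correct):
                seen.add(other)
                queue.append(other)
    return False

# Three toy executions, each given as a map process -> local history.
execs = {
    "A": {0: "h0",  1: "h1",  2: "h2"},
    "B": {0: "h0",  1: "h1x", 2: "h2"},   # differs from A only at process 1
    "C": {0: "h0y", 1: "h1x", 2: "h2y"},  # shares only process 1's view with B
}
```

A and C share no process's view, yet A ≈ C through B, which is exactly the chaining the lower-bound argument exploits.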
# Single-Failure Case

There is no algorithm that solves consensus in fewer than two rounds in the presence of one crash failure, if n ≥ 3.

The Idea: By contradiction. Consider a one-round execution in which each process proposes 0. What is the decision value? Consider another one-round execution in which each process proposes 1. What is the decision value? Show that there is a chain of similar executions that relates the two executions. So what?

# The αi's

Definition: αi is the execution of the algorithm in which no failures occur and only processes p0, …, pi−1 propose 1 (so in α0 every process proposes 0, and in αn every process proposes 1). [Figure residue: per-process proposal diagrams for α0, αi, αi+1 and αn omitted.]

# Adjacent αi's are similar!

Starting from αi, we build
a set of executions αi^j, 0 ≤ j ≤ n−1, as follows: αi^j is obtained from αi after removing the messages that pi sends to the j highest-numbered processors (excluding pi itself).

# The executions

[Figure residue: diagrams of αi^0, αi^1, …, αi^(n−1) omitted.]

# Indistinguishability

αi = αi^0 ∼ αi^1 ∼ ⋯ ∼ αi^(n−1): each step removes a single message from pi, which only its receiver can notice, so adjacent executions are similar with respect to every other process. Let βi^j be defined like αi^j, except that pi proposes 1 instead of 0. In αi^(n−1) and βi^(n−1), pi sends no messages to the other processes, so αi^(n−1) ∼ βi^(n−1) with respect to every process other than pi. Restoring the removed messages one at a time gives βi^(n−1) ∼ βi^(n−2) ∼ ⋯ ∼ βi^0.
Finally, βi^0 = αi+1, hence αi ≈ αi+1 for every i. By transitivity, α0 ≈ αn, so dec(α0) = dec(αn); but Validity forces dec(α0) = 0 and dec(αn) = 1. CONTRADICTION. [Figure residue: the indistinguishability diagrams are omitted.]
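The chain can be checked mechanically for a one-round, full-information exchange. The sketch below is my own encoding, not the slides': an execution is a pair (inputs, removed messages), and the assertions verify that every adjacent pair in the αi^j / βi^j chain is similar with respect to some process.

```python
def view(p, inputs, removed):
    # One full-information round: p sees its own input plus every message
    # (q, p) that was not removed; None marks a missing message.
    n = len(inputs)
    got = tuple(inputs[q] if (q, p) not in removed else None for q in range(n))
    return (inputs[p], got)

def similar_wrt_someone(e1, e2, n):
    # Adjacent executions in the chain must look identical to some process.
    return any(view(p, *e1) == view(p, *e2) for p in range(n))

def chain_alpha_to_next(i, n):
    # alpha_i: p0..p_{i-1} propose 1.  alpha_i^j drops p_i's messages to the
    # j highest-numbered other processes; beta_i^j additionally flips p_i's
    # proposal to 1, so beta_i^0 equals alpha_{i+1}.
    inputs0 = tuple(1 if p < i else 0 for p in range(n))
    inputs1 = tuple(1 if p <= i else 0 for p in range(n))
    targets = [q for q in range(n - 1, -1, -1) if q != i]
    fwd = [(inputs0, frozenset((i, q) for q in targets[:j])) for j in range(n)]
    back = [(inputs1, frozenset((i, q) for q in targets[:j]))
            for j in range(n - 1, -1, -1)]
    return fwd + back
```

The assertions below confirm αi ≈ αi+1 for every i, hence α0 ≈ αn by transitivity, which is exactly the contradiction the slides aim for.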
Title: Consensus using omega in asynchronous systems with unknown membership and degenerative Byzantine failures

Journal of Computer and System Sciences, Volume 107, February 2020, Pages 54-71
Ernesto Jiménez, José Luis López-Presa, Javier Martín-Rueda

Abstract

We study consensus in asynchronous systems where membership is unknown, and where up to f degenerative Byzantine failures can happen. In our failure model a faulty process can have a Byzantine behavior (i.e., it deviates from its specification), and, furthermore, every faulty process will degenerate such that eventually it will have permanent physical or transmission failures. We present a simple algorithm that solves Consensus using the Omega failure detector and a new broadcast primitive called RFLOB in an asynchronous system with degenerative Byzantine failures, which is optimal with respect to failures because it works when f < n/3. RFLOB guarantees reliable, FIFO and local order broadcast in systems with Byzantine processes and unknown membership. Finally, we present an algorithm that implements an Omega failure detector with unknown membership and minimum connectivity (i.e., communication reliability and synchrony properties) in a
system with degenerative Byzantine failures.

Keywords: Distributed algorithms; Consensus; Unknown membership; Omega failure detector; Byzantine failures

1. Introduction

Consensus is a fundamental agreement problem present, as a building block, at the core of most reliable systems. Consensus states that, from the set of values proposed by the processes in a system, only one of these values can be decided. Consensus has been studied under different failure models. The most commonly used is the crash failure model. In this failure model, processes cannot deviate from their specification, and a process can only fail by crashing permanently. It is well known that Consensus cannot be solved in an asynchronous system, even if it is restricted to crash failures and only one process can crash. To overcome this impossibility result, asynchronous systems can be augmented with failure detectors. A failure detector is a distributed device that returns information related to faulty processes. More precisely, a failure detector provides suspicions about processes which may have failed. However, failure detectors can make mistakes, e.g. by erroneously suspecting a non-faulty process (also called a correct process), or by not suspecting a failed one. To be useful, failure detectors are required to fulfill completeness and accuracy properties. The completeness property requires that non-faulty processes must eventually suspect all faulty processes, and the accuracy property restricts the erroneous suspicions that can be returned by the failure detector. Chandra and Toueg in their seminal work proposed eight different classes of failure detectors, and showed that any of them can be used to solve Consensus in asynchronous systems with the crash failure model. All these classes satisfy the weak completeness property. Weak completeness requires that there is a time after which some correct process permanently suspects all crashed processes (◇S
is one of these 8 original classes). In a follow-up work, Chandra, Hadzilacos and Toueg proposed Ω as a new class of failure detectors. The Ω failure detector class guarantees that, eventually, the failure detectors of all correct processes permanently return the same correct process. It has been shown that Ω is the weakest failure detector that can be used to solve Consensus in asynchronous systems with the crash failure model, since the information returned by Ω about faulty processes is the minimum necessary to solve Consensus. Traditionally, in asynchronous systems where consensus and failure detectors are studied, it is assumed that the membership is known, i.e. every process initially knows the identity of all the processes in the system. When the membership is known, Ω and ◇S are equivalent, i.e. a failure detector of Class Ω is also a failure detector of Class ◇S and vice versa. The assumption of the knowledge of the membership is not banal: it has been proved that, with unknown membership, no failure detector class with weak completeness can be implemented. Hence, none of the original 8 classes proposed by Chandra and Toueg can be implemented. Interestingly, it has also been shown that Ω can indeed be implemented without membership knowledge. This implies that the Ω and ◇S failure detector classes are not equivalent when membership is unknown. Roughly speaking, the reason why Ω is implementable, while classical failure detector classes are not, is that in Ω every correct process only needs to know the identity of one correct process, while in ◇S every correct process needs to know the identity of every faulty process. Thus, although initially unknown, correct processes will eventually know
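The Ω interface (eventually, all correct processes permanently trust the same correct process) can be illustrated with a toy round-based simulation. The real implementations discussed in this paper work under message passing and partial synchrony, so this sketch, with its hypothetical `omega_run` helper and simplified timing, only shows the guarantee, not an actual algorithm. It does keep the unknown-membership flavor: each process starts knowing only its own identity and learns the rest from heartbeats.

```python
def omega_run(ids, crashed_at, rounds):
    # Toy eventually-synchronous run: every alive process broadcasts a
    # heartbeat each round; everyone trusts the smallest id heard recently.
    heard = {p: {p: 0} for p in ids}   # unknown membership: know only self
    leaders = {}
    alive = list(ids)
    for r in range(1, rounds + 1):
        alive = [p for p in ids if crashed_at.get(p, rounds + 1) > r]
        for q in alive:                # q's heartbeat reaches all alive peers
            for p in alive:
                heard[p][q] = r
        for p in alive:
            recent = [q for q, last in heard[p].items() if r - last < 2]
            leaders[p] = min(recent)   # Ω output: currently trusted leader
    return leaders, alive
```

While the lowest-id process is alive it is everyone's leader; once it crashes and its heartbeat goes stale, all surviving processes converge on the next-smallest id and never change again.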
each other because, since they do not crash, they can permanently send messages that will eventually and periodically be received by the others. Conversely, a process which crashes before sending any message will be unknown to every other process in the system. Another very well studied failure model in the literature is the Byzantine failure model: faulty processes (also called Byzantine processes) can deviate from their specification and execute whatever operation they want, or even crash permanently. Hence, Byzantine processes may behave in a totally arbitrary way. This model considers any type of behavior of faulty processes, so the crash failure model is a subset of the Byzantine failure model because the latter includes all failures of the former. In other words, the Byzantine failure model is more general than the crash one. Thus, all impossibility results that hold for the crash failure model also hold for the Byzantine failure model. In particular, the impossibility of achieving consensus with one faulty process also holds for Byzantine processes. Failure detectors are a way to circumvent this impossibility. It has been observed that the failure detector paradigm cannot be decoupled from the algorithm that uses it if the failure model is extended to Byzantine processes: there is an inherent circular dependency, because the only way for a failure detector FD to be able to return all the suspicions is that the algorithm A which uses the failure detector cooperates by informing FD of failures. Hence, the failure detector FD needs to know how A is implemented. This line of work introduced the ◇M_A failure detector class with eventual mute completeness and eventual weak accuracy properties. Eventual weak accuracy guarantees that, eventually, some correct process is not suspected anymore by any other correct process. Eventual mute completeness states
that every correct process p eventually suspects all processes that are mute to p with respect to A. A process p is mute with respect to A if p eventually stops sending A's messages to at least one process in the system. Consensus has also been solved by introducing another muteness failure detector class, called ◇P_mute, with muteness strong completeness and eventual strong accuracy properties. Eventual strong accuracy states that, eventually, no correct process is suspected anymore by any correct process. Muteness strong completeness guarantees that, eventually, all mute processes are suspected by all correct processes. Other authors introduce consensus in asynchronous systems enriched with the ◇S(bz) failure detector class for Byzantine processes. The ◇S(bz) failure detector class has two properties: strong completeness and eventual weak accuracy (already described in the context of ◇M_A). Strong completeness specifies that every correct process eventually suspects all quiet processes. A process p is quiet if some process in the system eventually stops receiving messages from p. There is also an algorithm A that solves Consensus in asynchronous systems with the Byzantine failure model and the ◇W(Byz) failure detector class. This failure detector class has eventual weak accuracy (already described) and eventual weak Byzantine (k+1)-completeness properties. The eventual weak Byzantine (k+1)-completeness property specifies that at least k+1 correct processes eventually suspect every process that has deviated from algorithm A. Not surprisingly, due to the inherent circular dependency mentioned above, every consensus algorithm A of the previous papers requires synchrony constraints in cooperation with the failure detector FD, which does not allow A
to be executed in a pure asynchronous system augmented with this failure detector FD. Note that none of the previously presented failure detector classes (◇P_mute, ◇M_A, ◇S(bz) and ◇W(Byz)) can be implemented when the membership is unknown, because every correct process needs to know every mute (quiet or deviated) process. However, these processes might crash before sending any message, preventing correct processes in the system from knowing their existence. Almost all papers on consensus in systems with Byzantine failures use the rotating coordinator paradigm. Using this method, the consensus algorithm works in rounds. At each round, one process in the system acts as the coordinator, which must be known by every process in the system. The order in which the coordinators are chosen, as well as the membership, must be known by every process in the system to be able to advance to the next round.

1.1. Our contributions

In this paper we present a consensus algorithm for asynchronous systems where each process initially only knows its own identity and the size of the system, but ignores the identity of the other processes in the system. There are several advantages in solving consensus in systems with these features. From a practical point of view, consensus algorithms that work with unknown membership execute the same code in runs with different sets of processes in the system, without the need to statically change the set of addresses of the processes that participate in each run. Another advantage is to solve consensus in systems where the huge number of processes and their dynamism make it impossible to know the membership a priori (e.g. wireless
mobile ad-hoc networks, sensor networks, etc.). This implies that every process in our system with unknown membership has to dynamically learn about the existence of other processes. Roughly speaking, a process can infer the identity of other processes from the messages it receives during the run. An important handicap to be overcome by systems without knowledge of the membership is when a process crashes before sending any message, because it would be permanently unknown to any other process. The failure model of our paper follows the mortal Byzantine model, with an extension to omission failures. In that model, every faulty process is a Byzantine process such that there is a time after which it crashes permanently; such failures are called mortal Byzantine failures. We say in our model that we have degenerative Byzantine failures because each faulty process eventually degenerates and makes a physical or a transmission failure permanently. A physical failure is that in which the process crashes permanently (as a mortal Byzantine failure), for example due to a system fault from which it is impossible to recover. A transmission failure arises when a process omits a communication operation, either sending or receiving protocol messages (involuntary, for example when the network is congested, or voluntary in a Byzantine behavior of the process). The main advantage of our degenerative Byzantine failure model with respect to the mortal Byzantine one is that, similarly to the crash failure model, Consensus can be solved in a pure asynchronous system augmented with a failure detector such that both of them are totally decoupled. In other words, the failure detector can be seen as a "black-box" that the asynchronous consensus algorithm can use based only on its properties, and independently of its implementation. This independence between abstractions allows simplifying the design, development, correctness
and maintenance of algorithms, because we can focus on the properties to be fulfilled in a certain abstraction based only on the properties supplied by the other. So, in our model we can interchange different algorithms of the Ω failure detector class with different algorithms of consensus that use Ω, without worrying about how they have been implemented and how they have to be interconnected to work properly. As the authors of the mortal Byzantine model note, their failure model can be seen as a system in which, when Byzantine failures occur, there is another component outside the system that eventually detects these failures and stops them. They also present several practical examples especially focused on space systems where the misbehavior is detected by external units (ground operator and/or hardware units) which permanently disable the faulty units. One of these is the European automated transfer vehicle (ATV), with a set of computing elements and checking mechanisms whose task is to halt each damaged element when a failure is detected. Another example is focused on systems where a human operator halts one element of the system when a wrong behavior is detected. The degenerative Byzantine failure model we present in this paper extends the model of mortal Byzantine failures with processes that eventually make omission failures permanently. This type of failure complements that failure model by adding transmission failures to physical failures. In our model not all faulty processes eventually crash; there can be faulty processes alive that are undetectable for an external unit. For example, there could be a process that sends messages permanently but does not receive any. We call it a deaf process. If a process can receive messages but cannot send any, we call it
a mute process. Finally, a process can be alive and neither send nor receive messages. In this case we call it an autistic process. Note that mute and autistic processes are difficult to detect by an outside unit. If they never send a message, they are undetectable in a system with unknown membership. Besides, in an asynchronous system there is no way to infer that a process is mute, because it could just be slow. Furthermore, they might be assumed to have crashed since they stopped sending messages. Similarly, a deaf process may be perceived externally as a correct process because it sends messages periodically. With our model, we include these new types of failures while preserving all the advantages present in the mortal Byzantine model. In that model, a consensus algorithm needs to know the membership of the system; additionally, it needs all Byzantine processes to crash before making a decision. In our case, the consensus algorithm presented in this paper decides even when some faulty processes (deaf, mute or autistic) are still alive. Our algorithm is simple and optimal with respect to the number of failures because it works when f < n/3, f being the maximum number of faulty processes and n the number of processes in the system. This algorithm uses the Ω failure detector, adapted to our failure model with degenerative Byzantine failures, and a broadcast primitive, called reliable FIFO and local order broadcast (RFLOB), to communicate messages among processes. This RFLOB primitive guarantees that correct processes deliver the same sequence of messages if they were broadcast by the same (correct or not) process, and that the messages broadcast by each process are delivered by all correct processes following the same causal order (i.e., FIFO and local order). The aim of the
introduction of this RFLOB primitive is to simplify the design and understanding of our consensus algorithm. Note that, in the literature, there are several implementations of broadcast primitives with different reliability properties in systems with Byzantine processes, but, to our knowledge, there is no implementation of a broadcast primitive with our reliability properties: FIFO, local order, and unknown membership in a system with Byzantine processes. Hence, for the sake of completeness, we also present and prove in this paper an algorithm that implements RFLOB in a system with unknown membership and where up to f < n/3 processes are Byzantine. To our knowledge, all failure detector classes in systems with Byzantine failures present in the literature (◇P_mute, ◇M_A, ◇S(bz) and ◇W(Byz)) cannot be implemented when the membership is unknown. As we have mentioned above, this is so because every correct process needs to know every faulty process. However, in systems with unknown membership, these processes can fail before sending any message, preventing correct processes from detecting their presence in the system. In the mortal Byzantine failure model, ◇P is used in a consensus algorithm although it cannot be implemented when the membership is unknown, because it is one of the original 8 failure detector classes of Chandra and Toueg and its completeness property also forces it to know every faulty process. In this paper we present (and prove the correctness of) an algorithm that implements a failure detector of Class Ω in a system with unknown membership, minimum connectivity (i.e., the minimum communication reliability and synchrony properties) and degenerative Byzantine failures. To do so, we also prove
which are the minimum connectivity requirements necessary to implement any failure detector of Class Ω. Thus, we show that Consensus is implementable using Ω and RFLOB in a system with degenerative Byzantine failures even with unknown membership.

1.2. Structure of the rest of the paper

This paper is organized as follows. Our model, with unknown membership and degenerative Byzantine failures, is given in Section 2. In Section 3 we refine the definition of the Ω failure detector class to cope with all failure types present in our system. In Section 4 we introduce the reliable FIFO local order broadcast (RFLOB) primitive for Byzantine systems, which we use to simplify solving Consensus, without strengthening the system model. We also provide an algorithm that implements RFLOB and prove its correctness. In Section 5 we provide an algorithm to solve the Consensus problem in our system and prove its correctness. In Section 6 we find weak conditions to implement Ω in a partially synchronous system with degenerative Byzantine failures. In particular, we find minimal conditions on the number of eventually-timely and lossy-asynchronous links. We also provide an algorithm to implement Ω under these conditions and prove its correctness. Finally, in Section 7 we provide concluding remarks.

2. The asynchronous system S

In this section we describe the main features of the asynchronous system S, including the elements that form the system and the relationships among them.

2.1. Processes, communication, links and time

Let S be a system formed by a finite set Π = {ph, pk, …, ps} of n processes (i.e., |Π| = n). The identities of the processes do not need to be consecutive. We assume that the membership is unknown but the size n is known. Hence, initially
each process pi only knows its own identity (i.e., i) and the size n, but it does not know the identities of the other processes in the system. The only way a process has to infer the identities of other processes is from the messages it receives. Processes communicate by sending and receiving messages through links. Thus, every process pi ∈ S can send messages to every process pk ∈ S using a different and unidirectional link. A process pi invokes the operation broadcast_i(m) to send a copy of message m to all processes in the system S using each one of these links. If a process crashes while executing broadcast_i(m), a copy of m will be received by an arbitrary and unknown number of processes. Links are reliable: if a correct process pi sends a message m, eventually each correct process pk receives m once. Losses, changes, duplication or creation of spurious messages are not allowed in reliable links. However, messages may be delayed and delivered in a different order than they were sent. broadcast_i(m) and received_i(m) are the operations used by a process pi to transmit a message m through these reliable links. The system S is asynchronous. That is, with regard to processes, we consider that processes of S execute by taking steps, and the time required to execute each step is finite but unbounded; regarding links, unless otherwise stated, the time a message needs to get from the sender process to the receiver process is finite but unbounded.

2.2. Failure model

Let f be the maximum number of processes in Π that can fail. In the system S up to f
< 𝑛/3 processes can be faulty. We consider the following three types of failures.

• Crash. A process 𝑝𝑖 ∈ Π is denoted crashed if 𝑝𝑖 executes every operation specified by its algorithm until 𝑝𝑖 eventually fails by crashing permanently (i.e., 𝑝𝑖 stops taking steps forever). The set of crashed processes is denoted by Crashed.

• Omission. A process 𝑝𝑖 fails by omission if 𝑝𝑖 deviates from its algorithm only by failing to execute communication operations. The set of omitting processes is denoted by Omitting. Of all possible omission failures, we solely consider in S the following three subtypes.
  – Deaf. A process 𝑝𝑖 is denoted deaf if 𝑝𝑖 eventually stops executing all receiving operations. The set of deaf processes is denoted by Deaf.
  – Mute. A process 𝑝𝑖 is denoted mute if 𝑝𝑖 eventually stops executing all sending operations. The set of mute processes is denoted by Mute.
  – Autistic. A process 𝑝𝑖 is denoted autistic if 𝑝𝑖 eventually stops executing all sending and receiving operations. The set of autistic processes is denoted by Autistic.

• Mortal Byzantine. A process 𝑝𝑖 is denoted mortal Byzantine if it deviates from its algorithm by executing arbitrary operations whenever 𝑝𝑖 wants, but eventually 𝑝𝑖 crashes permanently. The set of mortal Byzantine processes is denoted by MortalByz.

A process 𝑝𝑖 ∈ Π is correct if it never fails; otherwise, 𝑝𝑖 is faulty. We denote the set of correct processes by Correct and the set of faulty processes by Faulty. Observe that every autistic process eventually stops executing sending and receiving operations; hence, Autistic = Deaf ∩ Mute. Besides, every crashed process eventually stops executing sending
and receiving operations, so Crashed ⊆ Autistic. Note also that a crashed process is also mortal Byzantine. However, since not all autistic processes crash, Autistic ⊈ MortalByz, and therefore Deaf ⊈ MortalByz and Mute ⊈ MortalByz. Thus, Faulty = MortalByz ∪ Deaf ∪ Mute. Hence, our failure model is broader than that of . Note that in our system S we only consider what we call degenerative Byzantine failures: when a process 𝑝𝑖 starts failing, it degrades and will eventually fail permanently (by either crashing or omitting communication operations).

2.3. Relation between failures, identities and communication

Now that we have introduced the failure model, we can explain in more depth the behavior of some communication features of system S. If a correct process 𝑝𝑖 invokes broadcast𝑖(𝑚), a copy of message m will certainly be sent to all processes in the system S, using the corresponding unidirectional links. If 𝑝𝑖 is a crashed or omitting process, there is no guarantee that a copy of m will be sent to every process in S. Finally, note that a mortal Byzantine process 𝑝𝑖 executing broadcast𝑖(𝑚) does not even guarantee that exact copies of m will be sent to all processes. For example, let 𝑝𝑖, 𝑝𝑘, 𝑝𝑠, 𝑝𝑟 ∈ Π with 𝑝𝑖 ∈ MortalByz; if 𝑝𝑖 issues broadcast𝑖(𝑚), it can send m to process 𝑝𝑘, 𝑚′ ≠ 𝑚 to process 𝑝𝑠, and nothing to process 𝑝
𝑟. We assume that each process 𝑝𝑖 can identify the sender process 𝑝𝑘 of every message m (this can be done, for example, using digital signatures included in the header of m). Hence, if a correct process 𝑝𝑖 receives a message whose sender address does not correspond to the signature, it considers the message invalid and discards it upon reception, so the proposed algorithms only need to consider valid messages.

In our system S we assume that messages can be uniquely identified. Thus, if a message m is received by a correct process more than once, subsequent copies of m can be identified and discarded upon reception, so the proposed algorithms do not need to deal with multiple copies of the same message. Since links in system S are reliable, the network will not generate duplicates of messages; however, mortal Byzantine processes might send multiple copies of the same message to some process, or even corrupt messages (messages that do not follow the protocol specification). For the sake of simplicity, we assume that a correct receiver discards every duplicate or corrupt message upon reception. Finally, observe that a mortal Byzantine process 𝑝𝑖 might not discard invalid, duplicate or corrupt messages, and can process them in whatever way it wants: the behavior of a mortal Byzantine process is unpredictable.

3. The omega failure detector class (Ω)

The well-known result of states that Consensus cannot be solved in asynchronous systems under the crash failure model, where processes are benign and at least one process may crash. In this failure model, we say that processes are benign (as opposed to Byzantine) because they
cannot deviate from their specification except by crashing. Therefore, since our degenerative Byzantine system S also includes the benign crash failures of , this impossibility result holds for S. To circumvent this impossibility, failure detectors were introduced . A failure detector is a distributed device that returns information related to faulty processes. Ω stands out among the failure detector classes because it is the weakest class that allows solving Consensus when a majority of processes never crash : the information returned by this failure detector is both necessary and sufficient to solve Consensus .

The systems defined in , , use the benign crash failure model of , in which processes only fail by crashing (i.e., a physical failure). The specification of the Ω failure detector class in these systems states that an Ω failure detector eventually outputs the same non-crashed process 𝑝𝑙 (as leader) to all non-faulty processes. In our system S, faulty processes include crashed, deaf, mute, autistic and mortal Byzantine processes, and none of them should be chosen as eventual leader by the failure detector. Mortal Byzantine and crashed processes can be detected by traditional Ω failure detectors; however, in our system S, we also need to avoid choosing a deaf, mute or autistic process as leader. Hence, our failure detector needs to cope with transmission failures as well as physical failures, and we need to adapt the definition of to include the transmission failures of system S. Roughly speaking, the Ω failure detector class in our system S eventually outputs the same correct process 𝑝𝑙 (as leader) to all correct processes. More formally:

Definition 1 (Ω). A failure detector D, with a function leader𝑖() that returns a process identity
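To make the eventual leadership property concrete, the following sketch (our illustration, not the paper's algorithm: the process identities, the recorded leader outputs, and the finite-run reading of "eventually" are all assumptions) checks that, from some point in the run on, every correct process's leader𝑖() returns the same correct process.

```python
# Sketch of the eventual leadership property of Ω, checked on a finite run.
# Each list records successive outputs of leader_i() at one correct process;
# the run is hypothetical and chosen only for illustration.
correct = {1, 4, 7}            # identities of the correct processes

outputs = {
    1: [2, 2, 4, 4, 4],        # process 1 first trusts a faulty process, then 4
    4: [4, 4, 4, 4, 4],
    7: [1, 4, 4, 4, 4],
}

def omega_holds(outputs, correct):
    # In a finite run, we read "eventually" as "at the end of the run":
    # all correct processes must output the same leader, and that leader
    # must itself be a correct process.
    leaders = {outs[-1] for outs in outputs.values()}
    return len(leaders) == 1 and next(iter(leaders)) in correct

assert omega_holds(outputs, correct)
# Agreement alone is not enough: converging on a faulty leader violates Ω.
assert not omega_holds({1: [2], 4: [2], 7: [2]}, correct)
```

This finite-run check is only an approximation of the definition, which quantifies over infinite runs; it is meant to show why both agreement on a single leader and correctness of that leader are required.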