Understanding BlockDAGs and GHOSTDAG

Liveness



Just as the safety property deals with approval times, liveness deals with the inclusion time $L(\alpha,\varepsilon)$.

What we call inclusion time is not the same as what is typically called the "first inclusion time". The first inclusion time measures how long it takes from the moment a transaction is posted to the mempool until it appears in any block of the chain. The inclusion time measures the time from first inclusion until the transaction is included in the selected chain.

Recall that the inclusion time is not how long it takes before the transaction is on the selected chain, but until it appears on the selected chain and stays there for sufficiently long. This definition is a bit awkward, because you cannot know in advance whether the transaction is going to stay on the selected chain for sufficiently long. The only way to measure $L(\alpha,\varepsilon)$ for a particular transaction is to wait until the transaction has stayed on the selected chain for at least $S(\alpha,\varepsilon)$.
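To make the "measure in hindsight" point concrete, here is a small Python sketch. The observation log, its time granularity, and the function name are my own illustration, not part of the text: it simply scans for the first stretch of length at least $S$ during which the transaction stayed on the selected chain.

```python
import math

def inclusion_time(snapshots, first_inclusion, S):
    """Measure L for one transaction, in hindsight.

    snapshots: sorted list of (time, in_selected_chain) observations,
               saying whether the transaction was on the selected chain
               at that time (a hypothetical observation log).
    first_inclusion: time the transaction first appeared in any block.
    S: the safety window S(alpha, epsilon) the transaction must survive.

    Returns the time from first inclusion until the start of the first
    stretch of length >= S during which the transaction stayed on the
    selected chain, or infinity if no such stretch has occurred yet.
    """
    run_start = None
    for t, on_chain in snapshots:
        if on_chain:
            if run_start is None:
                run_start = t
            if t - run_start >= S:
                return run_start - first_inclusion
        else:
            run_start = None  # fell off the selected chain; reset the stretch
    return math.inf  # undetermined so far; may still become finite later

# A transaction first included at time 3 that falls off the selected
# chain at time 6 and returns for good at time 8:
obs = [(t, t >= 8 or 3 <= t < 6) for t in range(20)]
print(inclusion_time(obs, 3, 5))  # 8 - 3 = 5
```

Note that truncating the log before the long stretch completes yields $\infty$, which is exactly the "you cannot know in advance" caveat above.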

This should not bother you. The purpose of the quantity $L(\alpha,\varepsilon)$ is to carry out analysis. Unlike the quantity $S(\alpha,\varepsilon)$, there is no need to know $L(\alpha,\varepsilon)$ in advance to estimate how secure your transaction is. In the next section we consider confirmation times, and see that they are defined in terms of $S(\alpha,\varepsilon)$.

It is tempting to try and define liveness in terms of $L(\alpha,\varepsilon)$ the same way we defined safety in terms of $S(\alpha,\varepsilon)$. That is, something along the lines of: liveness means that for any transaction, $L(\alpha,\varepsilon)$ is sufficiently small.

However, that is not quite the case. There are two scenarios where we expect transactions to satisfy $L(\alpha,\varepsilon) = \infty$: when the transaction is only on an orphaned block, and when it is one of several conflicting transactions.

We can ignore this and take liveness, on an intuitive level, to mean that "conflicts are eventually resolved". For completeness, I include an optional discussion of how we could define it formally.

Unavoidable Sets of Transactions*

For brevity, let us write $L_{\mathsf{tx}}(\alpha,\varepsilon)$ when we consider the inclusion time of a particular transaction $\mathsf{tx}$.

The antichain condition, which we will now define, elegantly captures all of the problematic cases.

To discuss it, we will need a notion called a maximal antichain:

Definition: A collection of nodes in a DAG is called a maximal antichain if the chain of any tip goes through it exactly once.

Assume that a transaction appears on all the blocks of a maximal antichain; then the selected chain of any tip must contain it.

In all fairness, requiring this is not a huge limitation at all. Most commonly, transactions are posted to the mempool, which means that they appear in many blocks. If all miners of parallel blocks have the same view of the mempool, there is a good chance they all included the same transactions. So if a transaction was posted with no conflicts, it is typically only a matter of time before it occupies an antichain.

On the other hand, it is possible that a transaction appears on only one block, and that this block never makes it to the selected chain. In particular, if a transaction happens to appear only on an orphaned block, it will never be on the selected chain.
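The maximal antichain condition can be checked mechanically. A minimal Python sketch for a block tree represented by parent pointers (the representation and block names are my own illustration, not from the text): the chain of a tip is its path to the genesis, and we verify it meets the candidate set exactly once.

```python
def is_maximal_antichain(candidate, parent, tips):
    """Check whether `candidate` is a maximal antichain of the block tree.

    candidate: set of blocks to test.
    parent: dict mapping each block to its parent (genesis maps to None);
            in a tree, the chain of a tip is its path to the genesis.
    tips: the leaves of the block tree.

    The set is a maximal antichain iff the chain of every tip passes
    through it exactly once.
    """
    for tip in tips:
        hits, block = 0, tip
        while block is not None:  # walk tip -> genesis
            if block in candidate:
                hits += 1
            block = parent[block]
        if hits != 1:
            return False
    return True

# A tiny tree: genesis G with two competing children A and B,
# where A has a child A1.
parent = {"G": None, "A": "G", "B": "G", "A1": "A"}
tips = ["A1", "B"]
print(is_maximal_antichain({"A", "B"}, parent, tips))  # True
print(is_maximal_antichain({"A1"}, parent, tips))      # False: misses B's chain
print(is_maximal_antichain({"G", "A"}, parent, tips))  # False: A1's chain hits twice
```

A transaction appearing on both A and B in this example is thus guaranteed to be on the chain of whichever tip wins.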

In the orphaned case, we have $L_{\mathsf{tx}}(\alpha,\varepsilon)=\infty$, and that's perfectly fine.

However, if the selected chains of all tips contain $\mathsf{tx}$, then we do expect $L_{\mathsf{tx}}(\alpha,\varepsilon)$ to be sufficiently small.

In particular, if $\mathsf{tx}$ appears on all the blocks of a maximal antichain, we expect $L_{\mathsf{tx}}(\alpha,\varepsilon)$ to remain small, but only for as long as this condition keeps holding. If suddenly a new parallel block that does not contain $\mathsf{tx}$ appears, then our definition must accommodate the possibility that eventually the selected chain will not contain $\mathsf{tx}$.

We also note that we do not require the transaction to remain in an antichain of blocks indefinitely. Once it has been on the selected chain for $S(\alpha,\varepsilon)$, the safety property guarantees that it will be reverted with probability at most $\varepsilon$, regardless of antichains.

Another scenario we want to consider is that of conflicts. Say that $\mathsf{tx}$ and $\mathsf{tx}'$ are two conflicting transactions that appear on parallel branches of the block tree. We expect that $L_{\mathsf{tx}}(\alpha,\varepsilon) = \infty$ or $L_{\mathsf{tx}'}(\alpha,\varepsilon) = \infty$, but not both. Why not? Because if we look at the blocks that contain either $\mathsf{tx}$ or $\mathsf{tx}'$, we do get an antichain! We say that the collection of transactions $\{\mathsf{tx},\mathsf{tx}'\}$ is unavoidable.

More generally, a set of transactions $T$ is called unavoidable if the set of blocks containing at least one transaction from $T$ contains a maximal antichain. That is, if we choose any tip and traverse its selected chain to the genesis, we are guaranteed to hit at least one transaction from $T$ at some point. In this case, we expect that there is at least one $\mathsf{tx}\in T$ such that $L_{\mathsf{tx}}(\alpha,\varepsilon)<\infty$. We actually expect something stronger: that $L_{\mathsf{tx}}(\alpha,\varepsilon) = O(\log(1/\varepsilon))$.

Compiling this into a definition is a bit delicate, as it requires taking into account the possibility that $T$ might become avoidable in the future. I defer this task to the exercises.

Liveness of GHOST

Imagine a blockchain that uses the GHOST protocol and sets its block delays to be significantly smaller than the network delay, and assume the position of a $10\%$ attacker.

Recall that we always assume the worst-case reasonable attacker. In decentralized consensus, in particular, we assume an all-powerful Byzantine attacker who is allowed to delay any message by as much as the network delay.

The attacker can use this ability to split the honest network into two chunks containing about the same hash power (as long as the difference between them is smaller than the hash rate of the attacker, the attack should work). She can then delay all messages between the two chunks by as much as possible, effectively making them compete with each other.

If we assume for now that the attacker does not create blocks, after a while the network would look a bit like this:

[Figure: a simple depiction of a balancing attack. Blue and red blocks represent the two isolated groups, and the curly bracket represents the network delay. Blocks that were created by one group but have not yet arrived at the other group are drawn in lighter colors.]

Now recall that blocks aren't created at regular intervals; in fact, the block creation process is incredibly noisy. So even if the two groups are perfectly balanced, at some point one side of the split will get a lucky burst of blocks large enough to look heavier even to the other side. At that point, the lighter group will switch to mine over the latest block of the heavier group, ending the split.

But what if the attacker, in the meantime, uses her hash rate to sprinkle blocks on both sides of the split?

If she has enough hash rate (namely, her hash rate is bigger than the difference between the hash rates of the two groups), she can strategically release her blocks to prevent the two groups from reconciling indefinitely, ruining the liveness of the network.

Such attacks are called balancing attacks.

Another variant of this strategy attempts a $51\%$ attack: instead of creating blocks on both sides, the attacker sends transactions to the larger side and mines blocks on the smaller side. After the network accepts her transactions as confirmed, she releases all the blocks on the lighter side, creating a reorg. The success probability of this attack does decrease exponentially over time, but not fast enough, since the balancing attack increases $L$ arbitrarily.

Note that balancing attacks work because the attacker's blocks, despite not being on the longest chain, still add weight to one of the sides, making this attack rather unique to GHOST.

What saves us from this attack is that if the block rate is sufficiently low, we will see long stretches where the network looks like a chain, prohibiting a balancing attack below that chain.

So whereas a typical HCR block tree looks mostly like a single chain with only occasional short forks, GHOST allows moderately increasing the block rate to obtain a block tree with many more parallel blocks.

The orphan rates increase without degrading safety, because the orphans' weight is counted into the chain, and without degrading liveness, because long stretches of orphanless chain remain sufficiently common, forcing the balancing adversary into a block race.

In practice, such all-powerful attackers do not exist. However, when operating at very high block rates, "chunks" that are well connected among themselves but poorly connected with other chunks can still form naturally. Identifying and responding to these chunks makes a balancing attack harder to pull off, but not impossible, as demonstrated in several simulations, most famously the experiment by Natoli and Gramoli.

It is fair to say that GHOST sacrifices a bit of robustness against liveness attacks to gain much more resiliency against double-spending attacks when operating at orphan-inducing rates. This is why GHOST chains furnish security similar to Bitcoin's with block delays as much as twenty times shorter.
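The observation that off-chain blocks still add weight is exactly the GHOST fork-choice rule: descend from the genesis, always stepping into the child whose subtree holds the most blocks. A toy Python sketch (the tree and block names are my own illustration, not from the text) contrasts it with longest-chain selection:

```python
def subtree_size(block, children):
    """Total number of blocks in the subtree rooted at `block`."""
    return 1 + sum(subtree_size(c, children) for c in children.get(block, []))

def ghost_chain(children, root="G"):
    """GHOST chain selection: from the root, repeatedly step into the
    child whose subtree carries the most blocks, so even blocks that
    end up off the selected chain pull the selection toward their side."""
    chain, block = [root], root
    while children.get(block):
        block = max(children[block], key=lambda c: subtree_size(c, children))
        chain.append(block)
    return chain

# Toy tree: side A is a long thin chain of three blocks, side B is a
# shorter chain whose first block has three parallel children.
children = {
    "G": ["A1", "B1"],
    "A1": ["A2"], "A2": ["A3"],       # longest chain: G-A1-A2-A3
    "B1": ["B2a", "B2b", "B2c"],      # bushy: 4 blocks in B1's subtree
}
print(ghost_chain(children))  # ['G', 'B1', 'B2a'] — weight beats length
```

The longest-chain rule would pick the A side here, but GHOST picks the B side because B1's subtree carries four blocks against A1's three; this is precisely why balancing blocks sprinkled on either side can keep tipping the scales.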