Liveness
Just as the safety property deals with approval times, liveness deals with inclusion times.
Recall that the inclusion time is not how long it takes until the transaction is on the selected chain, but how long until it appears on the selected chain and then stays there for sufficiently long. This definition is a bit awkward, because you can't know in advance whether the transaction is going to stay on the selected chain for sufficiently long. The only way to measure the inclusion time of a particular transaction is to wait until the transaction has stayed on the selected chain for at least that long.
This should not bother you. The purpose of the inclusion time is to carry out analysis. Unlike the approval time, there is no need to know the inclusion time in advance to estimate how secure your transaction is. In the next section we consider confirmation times, and see that they are defined in terms of inclusion times.
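To make the after-the-fact nature of this measurement concrete, here is a minimal sketch (all names invented, and assuming we can snapshot which transactions are on the selected chain over time) of computing an inclusion time in retrospect:

```python
def inclusion_time(tx, snapshots, stability_window):
    """snapshots: time-ordered list of (time, txs_on_selected_chain) pairs.
    Returns the time tx entered the selected chain and then stayed on it
    for at least stability_window, or None if that hasn't happened (yet)."""
    run_start = None
    for t, txs in snapshots:
        if tx in txs:
            if run_start is None:
                run_start = t          # tx (re)entered the selected chain
            if t - run_start >= stability_window:
                return run_start       # it also stayed long enough
        else:
            run_start = None           # a reorg dropped it; start over
    return None                        # cannot be determined from the data so far

snapshots = [(0, set()), (1, {"tx"}), (2, set()),    # included, then reorged out
             (3, {"tx"}), (4, {"tx"}), (5, {"tx"})]  # included again, and stable
print(inclusion_time("tx", snapshots, stability_window=2))  # 3
```

Note that the function can only answer once the stability window has already elapsed, which is exactly the awkwardness described above.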
It's tempting to try and define liveness the way we defined safety. That is, something along the lines of: liveness means that for any transaction, the inclusion time is sufficiently small.
However, that is not quite right. There are two scenarios where we actually expect a transaction to never make it onto the selected chain: when the transaction appears only on an orphaned block, and when it is one of several conflicting transactions.
We can ignore this, and take liveness on an intuitive level as "conflicts are eventually resolved". For completeness, I include an optional discussion about how we could define it formally.
For brevity, let us fix a particular transaction for the rest of the discussion.
The antichain condition, which we will now define, elegantly captures all problematic cases.
It is possible that a transaction appears on only a single block, and that this block ends up orphaned. In that case, the transaction will never be on the selected chain:
To better discuss this, we will need a notion called a maximal antichain:
Definition: A collection of nodes in a DAG is called a maximal antichain if the chain from any tip to the genesis goes through it exactly once.
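As a concrete illustration, here is a minimal sketch (invented names, assuming blocks are ids and each block knows its parent) of checking this condition on a block tree:

```python
def chain(block, parent):
    """Return the chain from `block` back to the genesis, inclusive."""
    out = []
    while block is not None:
        out.append(block)
        block = parent[block]
    return out

def is_maximal_antichain(blocks, tips, parent):
    """True iff the chain of every tip crosses `blocks` exactly once."""
    blocks = set(blocks)
    return all(len(blocks & set(chain(tip, parent))) == 1 for tip in tips)

# A toy tree: genesis g with two parallel children a, b; a has a child c.
parent = {"g": None, "a": "g", "b": "g", "c": "a"}
tips = ["c", "b"]
print(is_maximal_antichain({"a", "b"}, tips, parent))  # True
print(is_maximal_antichain({"a"}, tips, parent))       # False: b's chain misses it
print(is_maximal_antichain({"a", "g"}, tips, parent))  # False: c's chain hits it twice
```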
Assume a transaction appears on all the blocks of an antichain, like so:
In all fairness, this is not a huge limitation. Most commonly, transactions are posted to the mempool, which means that they will appear in many blocks. If all miners of parallel blocks have the same view of the mempool, there is a good chance they all included the same transactions. So if a transaction was posted with no conflict, it is typically only a matter of time before it occupies an antichain.
Recall that we always assume the worst-case reasonable attacker; in decentralized consensus, this means an all-powerful Byzantine attacker. In particular, the attacker is allowed to delay any message by as much as the network delay.
The attacker can use this ability to split the honest network into two chunks containing roughly the same hash power (as long as the difference between them is smaller than the attacker's hash rate, the attack should work). She can then delay any messages between the two chunks for as long as possible, effectively making them compete with each other.
If we assume for now that the attacker does not create blocks, after a while the network would look a bit like this:
where one side of the conflict had a lucky burst large enough to look heavier even to the other side. In the example above, at this point the red group will switch to mining over the latest blue block, ending the split.
But what if the attacker in the meantime uses her hashrate to sprinkle blocks on both sides of the split?
If she has enough hash rate (namely, more than the difference between the hash rates of the two groups), she can strategically release her blocks to prevent the two groups from reconciling indefinitely, ruining the liveness of the network.
Such attacks are called balancing attacks.
Note that balancing attacks work because the attacker's blocks, despite not being on the longest chain, still add weight to one of the sides, making this attack rather unique to GHOST.
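To see why the weight contributed by the attacker's blocks matters, here is a toy random-walk sketch of a split (all rates, names, and parameters invented; this is not a faithful network model), with and without a balancing attacker:

```python
import random

def split_duration(h1, h2, attacker, threshold=30, max_steps=100_000, seed=0):
    """Simulate block creation during a network split. Each step one block
    appears: on side 1 (prob h1), on side 2 (prob h2), or from the attacker
    (remaining prob), who always adds weight to the lighter side.
    The split resolves once one side leads by `threshold` blocks."""
    rng = random.Random(seed)
    diff = 0  # weight of side 1 minus weight of side 2
    for step in range(1, max_steps + 1):
        r = rng.random()
        if r < h1:
            diff += 1
        elif r < h1 + h2:
            diff -= 1
        else:
            diff += -1 if diff > 0 else 1   # attacker balances the sides
        if abs(diff) >= threshold:
            return step                     # split resolved by a lucky burst
    return None                             # still split when we stopped looking

print(split_duration(0.5, 0.5, attacker=0.0))    # balanced sides, no attacker
print(split_duration(0.35, 0.35, attacker=0.3))  # attacker outweighs the imbalance
```

With no attacker, the symmetric random walk drifts past the threshold fairly quickly; with an attacker whose rate exceeds the imbalance, the walk is constantly pushed back toward zero and the split typically outlives the simulation.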
What saves us from this attack is that if the block rate is sufficiently low, we will see long stretches where the network looks like a chain, prohibiting a balancing attack below that chain.
So if a typical HCR block tree looks like this:
GHOST allows moderately increasing block rates to obtain a block tree that looks more like this:
Orphan rates increase without degrading safety, because the orphans' weight is counted into the chain, and without degrading liveness, because long stretches of orphanless chain are sufficiently common, forcing a balancing adversary into a block race.
It is fair to say that GHOST sacrifices a bit of robustness against liveness attacks to gain much more resiliency against double-spending attacks when operating in orphan-inducing rates. This is why GHOST chains furnish security similar to Bitcoin with block delays as much as twenty times shorter.
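For concreteness, here is a minimal sketch (invented toy tree and names) of the GHOST selection rule, descending into the heaviest subtree at every fork, on a tree where it disagrees with the longest-chain rule:

```python
def children_of(parent):
    """Invert the child -> parent map into parent -> children."""
    kids = {}
    for block, par in parent.items():
        if par is not None:
            kids.setdefault(par, []).append(block)
    return kids

def subtree_weight(block, kids):
    """Number of blocks in the subtree rooted at `block`, itself included."""
    return 1 + sum(subtree_weight(c, kids) for c in kids.get(block, []))

def ghost_chain(parent, genesis="g"):
    """Select a chain by following the heaviest subtree at every fork."""
    kids = children_of(parent)
    chain, cur = [genesis], genesis
    while kids.get(cur):
        cur = max(kids[cur], key=lambda c: subtree_weight(c, kids))
        chain.append(cur)
    return chain

# Toy tree: a long-but-light branch vs. a short-but-bushy one.
parent = {"g": None,
          "a1": "g", "a2": "a1", "a3": "a2",              # chain of 3 blocks
          "b1": "g", "b2": "b1", "u1": "b1", "u2": "b1"}  # 2 blocks + 2 orphans
print(ghost_chain(parent))  # ['g', 'b1', 'b2']: the orphans' weight counts
```

The longest-chain rule would have selected the a-branch, whose tip is deeper; under GHOST, the orphans u1 and u2 tip the scales toward b1's subtree.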
In this case, the transaction will never be on the selected chain, and that's perfectly fine.
However, if the selected chains of all tips contain the transaction, then we do expect its inclusion time to be sufficiently small.
Then in that case we do expect that the inclusion time remains small, but only as long as this condition keeps holding. If suddenly a new parallel block that does not contain the transaction appears, like so:
then our definition must accommodate the possibility that eventually the selected chain will not contain the transaction.
We also note that we do not require the transaction to remain on an antichain of blocks forever. If it stays on the selected chain for sufficiently long, then the safety property eventually furnishes confidence that it will not revert, regardless of antichains.
Another scenario we want to consider is that of conflicts. Say that two conflicting transactions appear in the block tree like this:
We expect that one of them will eventually be on the selected chain, but not both. Why? Because if we look at the blocks that contain either of the two transactions, we do get an antichain! We say that such a collection of transactions is unavoidable.
More generally, a list of transactions is called unavoidable if the set of blocks containing at least one transaction from the list contains an antichain. That is, if we choose any tip and traverse its selected chain down to the genesis, we are guaranteed to hit at least one transaction from the list at some point. In this case, we expect that at least one transaction from the list will eventually be on the selected chain. We actually expect something stronger: that the time until the first of them is included is small.
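Under a toy representation where blocks are ids with parent pointers and each block carries a set of transactions (all names invented), unavoidability can be sketched as:

```python
def is_unavoidable(tx_list, block_txs, tips, parent):
    """True iff every chain from a tip down to the genesis contains a block
    carrying at least one transaction from tx_list."""
    txs = set(tx_list)
    for tip in tips:
        block, hit = tip, False
        while block is not None:
            if txs & block_txs.get(block, set()):
                hit = True
                break
            block = parent[block]
        if not hit:
            return False   # this tip's chain dodges the whole list
    return True

# Two conflicting transactions on parallel blocks under the genesis.
parent = {"g": None, "a": "g", "b": "g"}
block_txs = {"a": {"tx1"}, "b": {"tx2"}}
tips = ["a", "b"]
print(is_unavoidable(["tx1", "tx2"], block_txs, tips, parent))  # True
print(is_unavoidable(["tx1"], block_txs, tips, parent))         # False
```

Together the two conflicting transactions cover every tip's chain, so the pair is unavoidable, even though neither transaction is on its own.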
Compiling this into a definition is a bit delicate, as it requires taking into account the possibility that the list might become avoidable in the future. I defer this task to the exercises.
Imagine a blockchain that uses the GHOST protocol and sets its block delays to be significantly smaller than the network delay, and assume the position of an attacker.
Now recall that blocks aren't created at regular intervals; in fact, the block creation process is memoryless. So even if the two groups are perfectly balanced, at some point we will run into a situation similar to this:
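The "lucky burst" phenomenon can be sketched with a toy coin-flip model (invented names; since the groups are balanced, each new block lands on either side with probability 1/2):

```python
import random

def max_lead(n_blocks, seed=1):
    """Two perfectly balanced sides: each new block lands on either side
    with probability 1/2. Return the largest lead either side ever attains."""
    rng = random.Random(seed)
    lead, best = 0, 0
    for _ in range(n_blocks):
        lead += 1 if rng.random() < 0.5 else -1
        best = max(best, abs(lead))
    return best

print(max_lead(100))     # after a few blocks, only a modest lead
print(max_lead(10_000))  # wait longer, and a much larger burst shows up
```

Even with no bias at all, the lead fluctuates like a random walk, so waiting long enough guarantees a burst large enough to end the split.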
Another variant of this strategy attempts a double-spending attack: instead of creating blocks on both sides, the attacker sends transactions to the larger side while mining blocks on the smaller side. After the network accepts her transactions as confirmed, she releases all the blocks on the lighter side, creating a reorg. The success probability of such an attack does decrease exponentially over time, but not fast enough, since the balancing attack can keep the split going arbitrarily long.
In practice, such attackers do not exist. However, when operating at very high block rates, it is still the case that such "chunks", well connected among themselves but poorly connected with other chunks, could form naturally. Identifying and responding to these chunks makes a balancing attack harder to pull off, but not impossible, as demonstrated in several simulations.