Everything you need to know about Optimistic Rollup

Georgios Konstantopoulos, a research partner at Paradigm, analyzed the incentive structure of Optimistic Rollup and responded to common criticisms.

Written by: Georgios Konstantopoulos, research partner at Paradigm, a crypto venture capital firm.
Translator: Zhan Juan
Paradigm authorized Chain Wen to translate and publish the Chinese version of this article.

In the Ethereum ecosystem, one of the biggest challenges is how to achieve low latency and high throughput under severe resource constraints (such as CPU, bandwidth, memory, and disk space).

A system's decentralization is determined by the ability of the weakest node in the network to verify the system's rules. High-performance protocols that can run on low-resource hardware can be called "scalable".

In this article, we will delve into how current "layer 2" solutions work, the security models they come with, and how they address Ethereum's scalability problem.

If you are interested in the cutting edge of Ethereum scaling technology and want to know how such systems are built and structured, this article may be helpful to you.

Throughout the article, important keywords and concepts are highlighted in bold; these are terms you will keep encountering as you learn about cryptocurrencies. The topic is complicated and you may feel confused at times, but if you keep reading, you should come away with something.

Blockchain resource requirements

There are three factors that affect the resource requirements of running nodes in decentralized networks (such as Bitcoin and Ethereum):

  • Bandwidth: The cost of downloading and broadcasting any blockchain-related data.
  • Computation: The cost of running computation in scripts or smart contracts.
  • Storage: The cost of storing transaction data for indexing, and the cost of storing "state" needed to process new blocks of transactions. Note that storing "state" (account balances, contract bytecode, nonces) is more expensive than storing raw transaction data.

There are two ways to measure performance:

  • Throughput: The number of transactions that the system can process per second.
  • Latency: The time required for transaction processing.

The defining attribute of emerging crypto networks such as Bitcoin and Ethereum is decentralization, but what actually makes a network decentralized?

  • Low trust: This property allows any individual to verify that there will never be more than 21 million bitcoins, or that their bitcoins are not counterfeit. Anyone running the node software independently computes the latest state and verifies that all the rules were followed along the way.
  • Low cost: If running the node software is expensive, individuals will rely on trusted third parties to verify the state. High cost implies a high trust requirement, which is what we wanted to avoid in the first place.

Another desired property is scalability: the ability to improve throughput and latency super-linearly with respect to the cost of running the system. This definition is good, but it says nothing about "trust". We therefore speak of "decentralized scalability": achieving scalability without significantly increasing the system's trust assumptions.

Zooming in on Ethereum: its runtime environment is the Ethereum Virtual Machine (EVM). Transactions run through the EVM perform operations with different costs; for example, storage operations cost more than addition operations. The unit of computation in a transaction is called "gas", and the system parameters are set so that each block processes at most 12.5m gas, with a block produced every 12.5 seconds on average. Ethereum therefore has a latency of 12.5 seconds and a throughput of 1 million gas per second.

You may ask: what does 1 million gas per second get you?

  • ~47 "simple transfer" transactions per second. These cost 21,000 gas each and transfer ETH from A to B; they are the simplest type of transaction.
  • ~16 ERC20 token transfers per second. Compared with ETH transfers, these involve more storage operations, so each costs ~60k gas.
  • ~10 Uniswap asset trades per second. A token-to-token trade costs about 102k gas on average.
  • …or pick the gas cost of your favorite transaction and divide 1m gas per second (12.5m gas / 12.5 seconds) by it.

Note that as transactions become more computationally complex, the system's throughput drops to very low values. There is room for improvement!
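To make the arithmetic concrete, here is a minimal Python sketch of the throughput calculation above. The constants are the block gas limit and block time quoted in this article; everything else is simple division.

    # Rough Ethereum throughput estimate, using the parameters quoted above:
    # a ~12.5M gas block limit and a ~12.5 second average block time.
    BLOCK_GAS_LIMIT = 12_500_000
    BLOCK_TIME_SECONDS = 12.5

    GAS_PER_SECOND = BLOCK_GAS_LIMIT / BLOCK_TIME_SECONDS  # 1,000,000 gas/s

    def tx_per_second(gas_per_tx: int) -> float:
        """Transactions per second for a given per-transaction gas cost."""
        return GAS_PER_SECOND / gas_per_tx

    print(tx_per_second(21_000))   # ~47 simple ETH transfers per second
    print(tx_per_second(60_000))   # ~16 ERC20 transfers per second
    print(tx_per_second(102_000))  # ~10 Uniswap token-to-token trades per second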

Solution 1: Use an intermediary

We can use a trusted third party to facilitate all transactions. This would give us very high throughput, and latency would probably be sub-second. Great! No system-wide parameters change, but we would be opting into a trust model unilaterally set by that third party, who could censor us or even confiscate our assets. This is not desirable.

Solution 2: Make the block bigger and more frequent

We can reduce latency by shortening the time between blocks, and we can increase throughput by raising the block gas limit. This change makes nodes more expensive to operate and makes it harder for individuals to run them (as has already happened on platforms such as EOS, Solana, and Ripple).

In Solution 1, the trust requirement increases. In Solution 2, the cost increases. That rules both out as decentralized scaling options.

Rediscovering Optimistic Rollup from first principles

In the next section, we assume the reader is already familiar with hashes and Merkle trees.

Using what we have learned so far, let us simulate a Socratic dialogue. The goal is to arrive at a protocol that increases Ethereum's effective throughput without increasing the burden on users and node operators.

Q: We want to scale Ethereum without significantly changing its trust and cost assumptions. How do we do that?

Answer: We want to reduce the resource costs that existing operations impose on the system (see the three resource types above). To understand why this is not easy, we first need to look at Ethereum's architecture:

Every Ethereum node currently stores and executes every transaction submitted to it by users. During execution, a transaction runs through the EVM and interacts with the EVM's state (storage, balances, and so on), and these interactions are very expensive. Common smart contract optimization techniques focus on minimizing the number of state interactions, but they only yield small, constant-factor improvements.

Q: Are you saying there is a way to transact without touching state, thereby keeping resource costs low?

Answer: Taking this to the extreme, could we move all execution off-chain while keeping some data on-chain? We can do this by introducing a third party called a "sequencer", responsible for storing and executing the transactions users submit to it locally. To keep the system progressing, the sequencer must periodically publish on Ethereum the Merkle root of the transactions it has received along with the resulting state root. This is a step in the right direction: we store only O(1) data in Ethereum's state for O(N) off-chain transactions.
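To illustrate the O(1)-commitment-for-O(N)-transactions idea, here is a small Python sketch of what a sequencer might publish. The hashing scheme, batch format, and the way the post-state root is produced are illustrative assumptions, not the encoding of any particular rollup.

    from hashlib import sha256

    def h(data: bytes) -> bytes:
        return sha256(data).digest()

    def merkle_root(leaves: list[bytes]) -> bytes:
        """Binary Merkle root; the last node is duplicated on odd-sized levels."""
        level = [h(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2 == 1:
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    # O(N) transactions are executed off-chain by the sequencer...
    txs = [b"A->B: 1 ETH", b"B->C: 2 ETH", b"C->A: 0.5 ETH"]
    post_state_root = h(b"state after executing the batch")  # stand-in value

    # ...but only O(1) data (two 32-byte roots) is committed to layer 1.
    commitment = (merkle_root(txs), post_state_root)
    print(commitment)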

Q: So we can scale by having the sequencer compute everything off-chain and publish only Merkle roots?

Answer: Yes.

Q: OK, so once you have joined, the sequencer can guarantee very cheap transfers. How do deposits and withdrawals work?

Answer: Users enter the system by making a deposit on Ethereum, after which the sequencer credits the corresponding amount to their account. To withdraw, a user submits a claim on Ethereum of the form "I want to withdraw 3 ETH, my account currently holds >3 ETH, and here is the proof." Even though no individual user state lives on layer 1, users can present a Merkle proof against the state root published by the sequencer, showing that they have sufficient funds in the current state.
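Here is a minimal Python sketch of the proof the user would present: a Merkle inclusion proof of their account leaf against the published state root. The leaf encoding and hashing are illustrative assumptions.

    from hashlib import sha256

    def h(data: bytes) -> bytes:
        return sha256(data).digest()

    def verify_merkle_proof(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
        """Walk from the leaf up to the root; the flag says whether the sibling is on the left."""
        node = h(leaf)
        for sibling, sibling_is_left in proof:
            node = h(sibling + node) if sibling_is_left else h(node + sibling)
        return node == root

    # Toy two-leaf tree: the user's account leaf plus one sibling.
    account_leaf = b"alice: balance=5 ETH, nonce=7"
    sibling = h(b"bob: balance=2 ETH, nonce=3")
    posted_state_root = h(sibling + h(account_leaf))  # the sibling sits on the left

    # The user submits the leaf and the proof path with their withdrawal claim.
    assert verify_merkle_proof(account_leaf, [(sibling, True)], posted_state_root)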

Q: Now we know users need Merkle proofs to withdraw their funds. How do they obtain the data to construct those proofs?

Answer: They can ask the sequencer to provide it!

Q: What if the sequencer is temporarily or permanently unavailable?

Answer: The sequencer may be malicious, or it may simply go offline due to technical issues; either way, performance degrades (or worse, your assets get stolen!). We therefore also require the sequencer to publish the full transaction data on-chain for storage, without it being executed. The goal here is data availability: as long as all the data is permanently stored on Ethereum, even if a sequencer disappears, a new sequencer can retrieve all layer 2 data from Ethereum, reconstruct the latest layer 2 state, and pick up where its predecessor left off.

Q: If the sequencer is online but refuses to give me the data for my Merkle proof, can I download it from Ethereum?

Answer: Yes, you can sync an Ethereum node yourself, or connect to one of the many hosted node services.

Q: One thing I still don't understand: how can you store things on Ethereum without executing them? Doesn't every transaction go through the EVM?

Answer: Suppose you submit 10 transactions transferring ETH from A to B. Executing each one means incrementing A's nonce, decreasing A's balance, and increasing B's balance, which involves quite a few reads and writes against Ethereum's world state. Instead, you can send all the raw transaction bytes to a smart contract function like publish(bytes _transactions) public {}. Note that the body of this function is empty! The published transaction data is never interpreted, executed, or accessed anywhere; it is only stored in the blockchain's history, which is very cheap to write.
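As a back-of-the-envelope comparison of why publishing beats executing, the sketch below contrasts the calldata cost of posting raw transaction bytes (16 gas per non-zero byte, 4 per zero byte since EIP-2028) with the 20,000 gas charged for writing a fresh 32-byte storage slot (updating existing slots is cheaper, but still far above calldata). The 110-byte transaction size is an illustrative assumption.

    CALLDATA_NONZERO_BYTE_GAS = 16
    CALLDATA_ZERO_BYTE_GAS = 4
    SSTORE_NEW_SLOT_GAS = 20_000

    def publish_cost(tx_bytes: bytes) -> int:
        """Approximate calldata gas for publishing raw transaction bytes without executing them."""
        return sum(CALLDATA_ZERO_BYTE_GAS if b == 0 else CALLDATA_NONZERO_BYTE_GAS
                   for b in tx_bytes)

    raw_tx = b"\x01" * 110              # a ~110-byte transfer, all non-zero bytes
    print(publish_cost(raw_tx))         # 1,760 gas to publish the data
    print(3 * SSTORE_NEW_SLOT_GAS)      # vs ~60,000 gas to write three fresh state slots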

Q: Can we trust the sequencer? What if it publishes an invalid state transition?

Answer: Whenever the sequencer publishes a batch of state transitions, a "dispute period" begins, during which any party can submit a "fraud proof" showing that one of the state transitions is invalid. This is done by replaying the transaction that caused the state transition on-chain and comparing the resulting state root with the state root published by the sequencer. If the roots do not match, the fraud proof succeeds and the state transition is cancelled, along with any later state transitions built on top of it. Once a batch's dispute period has passed, it can no longer be challenged and is considered final.
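Below is a minimal, self-contained Python sketch of that dispute check. The "state" is just a balance map and the "state root" a hash of its sorted contents; a real rollup replays the batch through the EVM. All names and encodings are illustrative assumptions.

    from hashlib import sha256

    def state_root(balances: dict[str, int]) -> bytes:
        return sha256(repr(sorted(balances.items())).encode()).digest()

    def replay(balances: dict[str, int], txs: list[tuple[str, str, int]]) -> bytes:
        """Re-execute the batch (simple transfers) and return the resulting state root."""
        post = dict(balances)
        for sender, receiver, amount in txs:
            post[sender] -= amount
            post[receiver] = post.get(receiver, 0) + amount
        return state_root(post)

    def fraud_proof_succeeds(pre, txs, claimed_root: bytes) -> bool:
        """True if the replayed root differs from the sequencer's claim; the invalid
        transition (and everything built on top of it) is then cancelled."""
        return replay(pre, txs) != claimed_root

    pre = {"alice": 5, "bob": 0}
    txs = [("alice", "bob", 2)]
    print(fraud_proof_succeeds(pre, txs, replay(pre, txs)))                # False: honest claim
    print(fraud_proof_succeeds(pre, txs, sha256(b"bogus root").digest()))  # True: slash the bond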

Q: Wait a minute! You said earlier that something does not count as (decentralized) scalability if it a) increases costs or b) introduces new trust assumptions. In the scenario you describe, aren't we also assuming that someone will always report fraud?

Answer: Yes. We assume the existence of an entity called a "validator", responsible for monitoring for fraud, which publishes a fraud proof whenever the layer 1 and layer 2 states do not match. We also assume the validator can reliably get its fraud proof included on Ethereum within the dispute period. We consider the existence of a validator a "weak" assumption: if an application has thousands of users, only one of them needs to run a validator, which does not sound unreasonable. By contrast, changing Ethereum's trust model or increasing the cost of operating an Ethereum node would be a "strong" change of assumptions that we are not willing to make. This is the kind of significant change to the underlying system's assumptions that we ruled out when defining decentralized scalability.

Q: I agree that someone will run a validator, since every party benefits from the success of this new system. But that also depends on the actual operating cost. What are the resource requirements for running a validator or a sequencer?

Answer: Both the sequencer and the validator must run an Ethereum full node (not an archive node) plus a full layer 2 node to generate the layer 2 state. On top of that, the validator runs software responsible for creating fraud proofs, and the sequencer runs software responsible for bundling and publishing user transactions.

Q: Is that all?

Answer: Yes! Congratulations! You have just rediscovered the Optimistic Rollup (a combination of "optimistic contracts" with on-chain data availability, aka "rollup"), the most anticipated scaling solution of 2019-2021. It is easy to see why: it is the end product of many years of research in the Ethereum community, as you have just experienced in the short dialogue above.

Optimistic incentives

Layer 2 scaling rests on minimizing the number of transactions executed on-chain. We use fraud proofs to cancel any invalid state transitions that occur. Since fraud proofs are themselves on-chain transactions, we also want to minimize the number of fraud proofs published on Ethereum. In the ideal case, fraud never happens and fraud proofs are never published.

We discourage fraud by introducing fidelity bonds. To become a sequencer, a party must first post a bond on Ethereum, which it loses if fraud is proven against it. To incentivize individuals to detect fraud, the slashed bond is distributed to the validators.

Fidelity bonds and the dispute period

When designing an incentive mechanism around fraud proofs, there are two parameters to tune:

  • Fidelity bond size: the amount the sequencer must post, which is distributed to the validator if fraud is proven. The larger the bond, the greater the incentive to become a validator, and the smaller the incentive to commit fraud as a sequencer.
  • Dispute period: the time window during which fraud proofs can be submitted, after which layer 2 transactions are considered final on layer 1. A longer dispute period gives better security against censorship attacks; a shorter one gives a better experience to users withdrawing from layer 2 to layer 1, since they do not have to wait as long before reusing their funds on layer 1.

In our opinion, there is no single static value that is right for these two parameters. Perhaps a 10 ETH bond and a 1-day dispute period are sufficient, or maybe 1 ETH and 7 days are enough. The real answer depends on the incentive to become a validator (which depends on operating costs) and on how easy it is to get a fraud proof included (which depends on layer 1 congestion). Both should be adjustable, either manually or automatically.

It is worth mentioning that EIP-1559 introduces a new BASEFEE opcode to Ethereum, which can be used to estimate on-chain congestion and therefore to programmatically adjust the duration of the dispute period.
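As an illustration of that idea (not a mechanism from the article), a contract-level rule could stretch the dispute window when the observed base fee signals congestion, so validators can still land their fraud proofs. All constants below are assumptions.

    MIN_DISPUTE_PERIOD = 1 * 24 * 60 * 60   # 1 day, in seconds (assumed floor)
    MAX_DISPUTE_PERIOD = 7 * 24 * 60 * 60   # 7 days (assumed ceiling)
    REFERENCE_BASEFEE_GWEI = 30             # assumed "normal" congestion level

    def dispute_period(current_basefee_gwei: float) -> int:
        """Scale the dispute window with congestion, clamped to fixed bounds."""
        scaled = MIN_DISPUTE_PERIOD * (current_basefee_gwei / REFERENCE_BASEFEE_GWEI)
        return int(min(MAX_DISPUTE_PERIOD, max(MIN_DISPUTE_PERIOD, scaled)))

    print(dispute_period(15))    # calm chain: stays at the 1-day floor
    print(dispute_period(300))   # congested chain: stretches toward the 7-day ceiling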

Implementing this punishment mechanism correctly is important; otherwise it can be abused in practice. Here is an example of a naive, broken implementation:

  1. Alice posts a 1 ETH bond, which allows her to act as a sequencer in the system
  2. Alice publishes a fraudulent state update
  3. Bob notices this and publishes a dispute. If it succeeds, Alice's 1 ETH bond is granted to Bob and the fraudulent state update is cancelled
  4. Alice notices Bob's dispute and publishes a dispute of her own (challenging herself!)
  5. Alice gets her 1 ETH back: even though she attempted fraud, she is effectively never punished.

Alice can pull this off by front-running, that is, broadcasting the same dispute transaction as Bob but with a higher gas price, so that her transaction is executed before his. This means Alice can keep attempting fraud at minimal cost (just Ethereum transaction fees).

The solution to this problem is simple: instead of granting the entire bond to the disputer, burn X% of it. In the example above, if we burn 50%, Alice gets back only 0.5 ETH, which is enough to deter her from attempting fraud in step 2. Of course, burning part of the bond reduces the incentive to run a validator (since the payout is smaller), so the remainder after the burn must still be large enough to motivate users to become validators.
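The fix is easy to express in code. Here is a sketch of the slashing rule with the 50% burn from the example above; the structure is an illustrative assumption rather than any specific rollup's implementation.

    BURN_FRACTION = 0.5   # X% of the bond that is burned on a successful dispute

    def settle_dispute(bond_eth: float) -> tuple[float, float]:
        """Return (amount paid to the successful disputer, amount burned)."""
        burned = bond_eth * BURN_FRACTION
        reward = bond_eth - burned
        return reward, burned

    reward, burned = settle_dispute(1.0)   # Alice's 1 ETH bond
    print(reward, burned)                  # 0.5 ETH to the disputer, 0.5 ETH burned

Even if Alice wins the race to dispute her own fraud, she recovers only 0.5 ETH of her 1 ETH bond, so attempting fraud is strictly unprofitable.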

Common criticisms of Optimistic Rollup and our response

Now that we have covered the building blocks of Optimistic Rollup, let us explore and address the most common criticisms of the mechanism.

Long withdrawal/dispute periods are fatal to adoption and composability

As mentioned above, a long dispute period is good for security. There seems to be an inherent trade-off: a long dispute period is bad for users, who must wait a long time to withdraw their funds, while a short dispute period gives a smooth user experience at the cost of a higher risk of fraud.

We do not think this is a problem. Given the possibility of long withdrawal delays, we expect market makers to quickly step in and offer fast withdrawal services. This works because anyone verifying the layer 2 state can correctly determine whether a withdrawal is fraudulent, and will happily "buy" that withdrawal at a small discount. For example:

Participants:

  • Alice: holds 5 ETH on layer 2.
  • Bob: holds 4.95 ETH on layer 1 in a "market maker" smart contract, and runs a layer 2 validator.

Steps:

  1. Alice lets Bob know that she wants a "fast" withdrawal and will pay him a 0.05 ETH fee
  2. Alice sends a withdrawal request to Bob's "market maker" smart contract
  3. At this point, one of two things happens:
    1. Bob checks on his layer 2 validator that the withdrawal is valid and approves the fast withdrawal, immediately transferring 4.95 ETH to Alice's layer 1 address. Once the dispute period ends, Bob receives the 5 ETH and nets a profit.
    2. Bob's validator flags the transaction as invalid. Bob disputes the state transition it caused, the transaction is cancelled, and Bob earns the bond because the sequencer let a malicious transaction through.

Either Alice is honest and gets her money out immediately, or she is dishonest and gets punished. We expect the fees paid to market makers to be compressed over time, and if there is demand for this service, the process will eventually be abstracted away from users entirely.
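The economics of the example are easy to check. The sketch below walks through the two outcomes with the amounts used above (a 5 ETH withdrawal and a 0.05 ETH fee); it is a plain illustration of the flow, not a contract implementation.

    WITHDRAWAL_AMOUNT = 5.00
    FAST_WITHDRAWAL_FEE = 0.05

    def fast_withdraw(withdrawal_is_valid: bool) -> str:
        if withdrawal_is_valid:
            # Bob advances 4.95 ETH on layer 1 now and collects the full 5 ETH
            # once the dispute period ends, netting the 0.05 ETH fee.
            alice_receives = WITHDRAWAL_AMOUNT - FAST_WITHDRAWAL_FEE
            return f"Alice gets {alice_receives} ETH now; Bob later claims {WITHDRAWAL_AMOUNT} ETH"
        # Bob's validator flags the withdrawal as invalid; he disputes the state
        # transition and earns (part of) the slashed sequencer bond instead.
        return "Withdrawal disputed and cancelled; sequencer bond slashed"

    print(fast_withdraw(True))
    print(fast_withdraw(False))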

The most important implication of this technique is that it enables composability with layer 1 contracts without waiting out the full dispute period.

Note that this technique was first described in the article "Simple Fast Withdrawals".

Miners can be bribed to censor withdrawals and undermine the security of Optimistic Rollup

In the post "Optimistic Rollup's Near-Zero Cost Attack Scenario", the argument is made that miners' incentives can be swayed cheaply enough that sequencers could collude with Ethereum miners to censor any dispute transactions. Given how much the system's security depends on dispute resolution, this would of course be fatal to any optimistic system.

We disagree with this article's argument. We expect that the honest side is always willing to counter-bribe miners and can bring more funds to bear than the malicious side. In addition, every time miners deviate from "honest" behavior to help a malicious party win, they incur an extra cost: such behavior erodes the value of Ethereum, which in turn raises the cost to miners of engaging in it.

In fact, this scenario has been studied in the academic literature, which shows that "the threat of counterattack induces a subgame perfect equilibrium in which no attack occurs."

We would like to thank Hasu for bringing this paper's argument to our attention.

The verifier's dilemma discourages running validators and breaks Optimistic Rollup

Ed Felten has written an excellent analysis of the verifier's dilemma along with a solution, which we summarize as follows:

  1. If the system's incentives work as intended, no one cheats
  2. If no one cheats, there is no point in running a validator, because you cannot make money from it
  3. Because no one runs a validator, the sequencer eventually has a chance to cheat
  4. The sequencer cheats, and the system no longer works as intended

This sounds serious, almost paradoxical. Assuming the reward is fixed, more validators means a lower expected reward for each individual validator. Moreover, with more validators watching, less fraud occurs, so the pie to be shared shrinks further, which exacerbates the problem. In a follow-up analysis, Felten also offers a way to resolve the verifier's dilemma.
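A toy expected-value calculation makes the dilemma concrete. Assume (purely for illustration) a fixed reward per caught fraud, split evenly among the validators who are watching.

    def expected_annual_reward(fraud_attempts_per_year: float,
                               reward_per_fraud_eth: float,
                               num_validators: int) -> float:
        """Expected ETH per validator per year, assuming an even split of rewards."""
        return fraud_attempts_per_year * reward_per_fraud_eth / num_validators

    print(expected_annual_reward(2, 0.5, 5))    # 0.2 ETH/year with few validators and some fraud
    print(expected_annual_reward(0, 0.5, 100))  # 0.0 ETH/year once fraud (and the pie) disappears

As the second line shows, the monetary reward tends to zero exactly when the system works as intended, which is why the argument below appeals to non-monetary incentives.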

I want to take the opposite position here: the verifier's dilemma is not as important as critics claim. In practice, there are non-monetary incentives to run a validator. For example, you may have built a large application on the rollup, or you may hold its tokens; if the system fails, your application stops working or your tokens lose value. Beyond that, the demand for fast withdrawals creates an incentive for market-maker validators to exist (as we saw in the previous section), an incentive that does not depend on fraud occurring. As a concrete analogy, Bitcoin offers no incentive to store the full blockchain history or to serve your local data to peers, yet people still do it altruistically.

Even if running a validator is not incentive-compatible when considered in a vacuum, it keeps the system safe, which is what matters most to the entities invested in the system's success. We therefore believe that optimistic layer 2 systems do not need an explicit mechanism to work around the verifier's dilemma.

Conclusion

We analyzed one of the technologies that will be crucial to Ethereum in 2021: Optimistic Rollup.

To summarize its benefits: Optimistic Rollup scales Ethereum while carrying over Ethereum's security, composability, and developer moat, improving performance without materially affecting the cost or trust requirements of Ethereum users. We explored the incentive structures that make Optimistic Rollup work and responded to common criticisms.

We want to emphasize that the performance ceiling of an Optimistic Rollup is determined by the data published on layer 1. It is therefore advantageous to: 1) compress the published data as much as possible (for example, by aggregating BLS signatures), and 2) have a large and cheap data layer (for example, Eth2).

For further reading, we recommend Buterin's incomplete guide to rollups and his writing on trust models. We also recommend studying ZK Rollup, a close relative of Optimistic Rollup. Finally, there are other paths to decentralized scalability, namely sharding and state channels, each with its own advantages and disadvantages.

Source link: research.paradigm.xyz