Vitalik’s article is very inspiring, especially in its assessment of the current state of layer 2 development, its optimistic affirmation of Blob’s performance headroom, and its outlook on future sharding technology; it even points out some directions for optimizing layer 2. This piece is adapted and compiled from the article “V God: Ethereum blobs are moving towards large-scale popularity, L2 needs to be improved in four major directions” by Chain Observer, translated and written by PANews.
Contents of this article
How to deal with 50 million transactions every day?
Blob is not being used efficiently
Rollup security is still insufficient
How to use Blob space more effectively with Rollup
How should we understand Vitalik Buterin’s latest thoughts on Ethereum’s scalability? Some say Vitalik’s mention of Blobscription runs against mainstream opinion. So how does the Blob data packet actually work? After the Cancun upgrade, why is Blob space still not being used efficiently? And is DAS (data availability sampling) paving the way for sharding?
In my opinion, post-Cancun performance is sufficient, and what Vitalik is concerned about is the development of Rollups themselves. Why? Below is my understanding:
As I have explained many times before, a Blob is a temporary data packet that the consensus layer can access directly and that is decoupled from EVM calldata. The direct benefit is that the EVM can execute transactions without accessing Blob data, so Blobs do not incur the higher computation fees of the execution layer.
Currently, after balancing a series of factors, each Blob is 128 KB in size. A single batch transaction to the main network can carry up to two Blobs. Ideally, the final goal is for a main network block to carry about 128 Blob data packets, roughly 16 MB per block.
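For intuition, a back-of-the-envelope sketch of these figures (the 128 KB Blob size and 128-Blob target are from the text; the 12-second slot time is standard Ethereum):

```python
BLOB_SIZE_BYTES = 128 * 1024           # one Blob is 128 KB
TARGET_BLOBS_PER_BLOCK = 128           # long-term sharding-era target
SLOT_SECONDS = 12                      # one main network block per slot

block_da_bytes = BLOB_SIZE_BYTES * TARGET_BLOBS_PER_BLOCK
print(block_da_bytes / (1024 * 1024))               # 16.0 MB of Blob data per block
print(round(block_da_bytes / SLOT_SECONDS / 1024))  # ~1365 KB/s of sustained DA bandwidth
```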
Therefore, a Rollup project should strive to balance the number of Blobs it uses, its TPS transaction capacity, and the Blob storage cost borne by main network nodes, with the goal of using Blob space at the best cost-effectiveness ratio.
Take Optimism as an example: it currently handles about 500,000 transactions per day, submitting a batch roughly every 2 minutes, with each batch carrying one Blob. Why only one? Because its TPS is not that high; carrying two would leave each Blob underfilled while incurring extra storage costs, which is unnecessary.
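A quick sanity check of the one-Blob-per-batch choice, using only the figures above:

```python
TX_PER_DAY = 500_000
BATCH_INTERVAL_MIN = 2
BLOB_SIZE_BYTES = 128 * 1024

batches_per_day = 24 * 60 // BATCH_INTERVAL_MIN  # 720 batches per day
tx_per_batch = TX_PER_DAY / batches_per_day      # ~694 transactions per batch
budget_per_tx = BLOB_SIZE_BYTES / tx_per_batch   # ~189 bytes of Blob space per tx
print(batches_per_day, round(tx_per_batch), round(budget_per_tx))
```

A single Blob already gives every transaction in the batch a comfortable byte budget, so a second Blob would mostly carry empty space.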
As Rollup transaction volumes grow, what happens if a chain needs to process 50 million transactions every day? There are three levers (a rough capacity estimate follows this list):
Compress the transactions in each batch as much as possible, so that more transactions fit within the same Blob space
Increase the number of Blobs
Reduce the frequency of batch transactions
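A rough feasibility sketch combining these three levers. The ~150-byte average compressed transaction size is an illustrative assumption, not a figure from the article, and the 2-minute cadence simply reuses Optimism’s current one:

```python
import math

TX_PER_DAY = 50_000_000
COMPRESSED_TX_BYTES = 150        # assumed average size after compression (illustrative)
BLOB_SIZE_BYTES = 128 * 1024
BATCHES_PER_DAY = 24 * 60 // 2   # keep Optimism's current 2-minute cadence

tx_per_blob = BLOB_SIZE_BYTES // COMPRESSED_TX_BYTES          # 873 tx fit in one Blob
blobs_per_day = math.ceil(TX_PER_DAY / tx_per_blob)           # ~57,274 Blobs per day
blobs_per_batch = math.ceil(blobs_per_day / BATCHES_PER_DAY)  # ~80 Blobs per batch
print(tx_per_blob, blobs_per_day, blobs_per_batch)
```

Under these assumptions, about 80 Blobs per batch would be needed, far beyond the post-Cancun limit of 6 Blobs per block, which is exactly why all three levers, and eventually the 128-Blob target, matter.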
Given that the gas limit and storage costs constrain the data capacity of main network blocks, 128 Blobs per slot is the ideal end state. Currently, Optimism uses only one Blob every 2 minutes, leaving layer 2 projects plenty of room to increase TPS, expand their user base, and grow their ecosystems.
Therefore, in this period after the Cancun upgrade, there is no real competition (“involution”) yet among Rollups over how many Blobs they use, how frequently they use them, or over Blob space itself.
The reason Vitalik mentioned Blobscription is that this kind of inscription activity can temporarily inflate demand for Blob usage as its transaction volume surges, thereby driving up Blob fees. Blobscription simply serves as an example for a deeper understanding of how the Blob mechanism works; what Vitalik really wants to express has no direct relation to Blobscription itself.
In theory, if a layer 2 project frequently batched large volumes of transactions onto the main network and filled every Blob to capacity each time, it would crowd out other layer 2 projects’ normal use of Blobs, provided it were willing to bear the high cost of such batch submissions. At present, however, this is like buying hash power to launch a 51% attack on BTC: theoretically feasible, but lacking any practical motivation.
The introduction of Blobs aims to reduce the burden on the EVM and ease the operating burden on nodes, which is undoubtedly a tailor-made solution for Rollups. Clearly, Blob space is not yet being used to capacity, so layer 2 gas fees will remain stable in a “low” range for a long time, giving layer 2 developers a long “golden” development window.
So, if the layer 2 market one day grows to the point where enormous volumes of transactions are batched to the main network every day, what happens when the current Blob data packets are no longer enough? Ethereum already has a solution planned: data availability sampling (DAS):
Simply put, data that originally had to be stored in full by a single node can instead be distributed across multiple nodes at once. For example, each node stores only 1/8 of all Blob data, and 8 such nodes form a small group that jointly provides DA capability, which is equivalent to expanding Blob storage capacity by 8 times. This is also what the future sharding phase is meant to deliver.
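A toy illustration of the 1/8 split described above. Real DAS adds erasure coding and random sampling so the data survives missing or dishonest nodes; this sketch shows only the storage-sharing intuition:

```python
def shard_blob(blob: bytes, num_nodes: int = 8) -> list[bytes]:
    """Naively split one Blob so each node in a group stores 1/num_nodes of it."""
    chunk = len(blob) // num_nodes
    return [blob[i * chunk:(i + 1) * chunk] for i in range(num_nodes)]

def reassemble(shards: list[bytes]) -> bytes:
    """The group jointly reconstructs the full Blob from its shards."""
    return b"".join(shards)

blob = bytes(128 * 1024)            # one 128 KB Blob of zeroed placeholder data
shards = shard_blob(blob)
assert reassemble(shards) == blob
print(len(shards), len(shards[0]))  # 8 shards of 16384 bytes: 8x less per node
```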
For now, though, Vitalik’s repeated emphasis on this point reads like a warning to layer 2 projects: stop complaining that Ethereum’s DA capacity is expensive; at your current TPS, you have not come close to maximizing what the Blob data packets can do. Hurry up and grow your ecosystems, users, and transaction volumes instead of always thinking about escaping Ethereum’s DA with a one-click chain launch.
Vitalik later added that among the current core Rollups, only Arbitrum has reached Stage 1, and although DeGate, Fuel, and others have reached Stage 2, they are not yet widely known. Stage 2 is the ultimate goal of Rollup security, yet very few Rollups have even reached Stage 1, and the majority are still at Stage 0. Vitalik’s worry about the state of the Rollup industry is evident.
In fact, on the scalability bottleneck alone, Rollup layer 2 solutions still have plenty of room for improvement:
Using Blob space efficiently through data compression: OP-Rollup currently has a dedicated compressor component to perform this work, while ZK-Rollup compresses off-chain execution into succinct SNARK/STARK validity proofs that are submitted to the main network.
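A minimal sketch of the compression idea, using Python’s zlib as a stand-in (the OP Stack’s actual compressor and batch layout differ; the transaction records here are invented for illustration):

```python
import json
import zlib

# Invented, highly repetitive transfer records standing in for an L2 batch.
batch = [{"from": f"0x{i:040x}", "to": f"0x{i + 1:040x}", "value": 1_000 + i}
         for i in range(700)]
raw = json.dumps(batch).encode()

compressed = zlib.compress(raw, level=9)
print(len(raw), len(compressed))                     # raw vs. compressed size in bytes
print(f"ratio: {len(raw) / len(compressed):.1f}x")   # similar transactions compress well
```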
Lowering layer 2’s reliance on the main network as much as possible, using optimistic fraud-proof technology to guarantee L2 security only in special circumstances. In Plasma, for example, most of the data stays off-chain, while deposit and withdrawal scenarios occur on the main network, so the main network can still guarantee their security.
This means that layer 2 should involve the main network only for critical operations such as deposits and withdrawals, which both reduces the main network’s burden and improves L2’s own performance. Similar thinking appears elsewhere: the parallel-processing capability raised in discussions of parallel EVM, which filters, classifies, and pre-processes large volumes of transactions; and Metis’ hybrid Rollup implementation, in which regular transactions use OP-Rollup while special withdrawal requests take a ZK route.
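A schematic sketch of the hybrid-routing idea attributed to Metis above (the function names and route labels are purely illustrative, not Metis’ actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Tx:
    kind: str                      # "transfer", "swap", "withdrawal", ...
    payload: dict = field(default_factory=dict)

def route(tx: Tx) -> str:
    """Hypothetical router: ordinary traffic takes the cheap optimistic path;
    only withdrawals take the heavier ZK validity-proof path."""
    if tx.kind == "withdrawal":
        return "zk_route"          # prove correctness, settle on the main network
    return "op_rollup_batch"       # batch optimistically, post to a Blob later

print(route(Tx("transfer")))       # op_rollup_batch
print(route(Tx("withdrawal")))     # zk_route
```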
Overall, Vitalik’s article on Ethereum’s future scalability solutions is very inspiring, especially in its assessment of the current state of layer 2 development, its optimistic affirmation of Blob’s performance headroom, and its outlook on future sharding technology; it even points out some directions for optimizing layer 2. The only uncertainty left for layer 2 is how quickly it can accelerate its own development.