Vitalik: The Possible Future of the Ethereum Protocol—The Verge

Author: Vitalik, founder of Ethereum; compiled by Deng Tong, Bitchain Vision

Note: This article is the fourth part of the series recently published by Vitalik, founder of Ethereum, "Possible futures of the Ethereum protocol, part 4: The Verge". For the third part, see "Vitalik: Key objectives of the Ethereum Scourge phase"; for the second part, see "Vitalik: How should the Ethereum protocol develop in the Surge stage"; for the first part, see "What else can be improved on Ethereum PoS". The following is the full text of the fourth part:


Special thanks to Justin Drake, Hsiao-wei Wang, Guillaume Ballet, Ignacio, Josh Rudolf, Lev Soukhanov, Ryan Sean Adams and Uma Roy for their feedback and review.

One of the most powerful features of blockchains is that anyone can run a node on their own computer and verify that the chain is correct. Even if 95% of the nodes running the chain's consensus (PoW, PoS...) immediately agreed to change the rules and started producing blocks under the new rules, everyone running a fully verifying node would refuse to accept the chain. Stakers who are not part of such a group would automatically converge on, and continue to build, a chain that follows the old rules, and fully verifying users would follow that chain.

This is the key difference between blockchains and centralized systems. However, for this property to hold, running a fully verifying node needs to be actually feasible for a critical mass of people. This applies both to stakers (since if stakers do not verify the chain, they are not actually contributing to enforcing the protocol rules) and to ordinary users. Today, it is possible to run a node on a consumer laptop (including the one used to write this article), but doing so is difficult. The Verge aims to change this, making fully verifying the chain so computationally cheap that every mobile wallet, browser wallet, and even smartwatch does it by default.

The Verge, 2023 roadmap.

Initially, "the Verge" referred to the idea of moving Ethereum state storage to Verkle trees – a tree structure that allows much more compact proofs, enabling stateless verification of Ethereum blocks. A node could verify an Ethereum block without having any of the Ethereum state (account balances, contract code, storage...) on its hard drive, at the cost of a few hundred kilobytes of proof data and a few hundred milliseconds of extra time to verify a proof. Today, the Verge represents a larger vision focused on maximally resource-efficient verification of the Ethereum chain, which includes not only stateless verification technology, but also verifying all Ethereum execution with SNARKs.

In addition to the long-term goal of SNARK-verifying the entire chain, another new question concerns whether Verkle trees are the best technology in the first place. Verkle trees are vulnerable to quantum computers, so if we replace the current KECCAK Merkle Patricia tree with Verkle trees, we will have to replace the trees again later. The natural alternative is to jump directly to STARKed Merkle branches over a binary tree. Historically, this has been considered unviable because of overhead and technical complexity. Recently, however, we have seen Starkware prove 1.7 million Poseidon hashes per second on a laptop with Circle STARKs, and proving times for more "traditional" hashes are also rapidly shrinking thanks to technologies such as GKR.

So, over the past year, the Verge has become more open-ended, with multiple possibilities on the table.

The Verge: Key Objectives

  • Stateless clients: fully verifying clients and staking nodes should not need more than a few GB of storage.

  • (Long term) Fully verify the chain (consensus and execution) on a smartwatch. Download some data, verify a SNARK, done.

In this article, the following content is highlighted:

  • Stateless Verification: Verkle or STARKs

  • Proof of validity of EVM execution

  • Proof of validity of consensus

Stateless Verification: Verkle or STARKs

What problems do we want to solve?

Today, Ethereum clients need to store hundreds of gigabytes of state data to verify blocks, and this number keeps growing every year. The raw state data grows by about 30 GB per year, and individual clients must store some extra data on top of that to be able to update the trie efficiently.

This reduces the number of users who can run fully verifying Ethereum nodes: although hard drives large enough to store all of the Ethereum state, and even years of history, are widely available, the computers people buy by default tend to have only a few hundred gigabytes of storage. The state size also adds great friction to the process of setting up a node for the first time: the node needs to download the entire state, which can take hours or days. This has all kinds of knock-on effects. For example, it makes it harder for stakers to upgrade their staking setup. Technically, this can be done without downtime – start a new client, wait for it to sync, then shut down the old client and transfer the keys – but in practice it is technically complex.

What is it and how does it work?

Stateless verification is a technique that allows nodes to verify blocks without holding the full state. Instead, each block comes with a witness, which includes (i) the values (e.g. code, balances, storage) at the specific locations in the state that the block will access, and (ii) a cryptographic proof that these values are correct.

Actually implementing stateless verification requires changing the Ethereum state tree structure. This is because the current Merkle Patricia tree is extremely unfriendly to implementing any cryptographic proof scheme, especially in the worst case. This is true both for "raw" Merkle branches and for the possibility of "wrapping" Merkle branches in a STARK. The main difficulties stem from two weaknesses of the MPT:

  • It is hexary (i.e., each node has 16 children). This means that, on average, a proof in a tree of size N has 32*(16-1)*log16(N) = 120*log2(N) bytes, or about 3840 bytes in a 2^32-item tree. With a binary tree, only 32*(2-1)*log2(N) = 32*log2(N) bytes are needed, i.e. about 1024 bytes.

  • The code is not Merkelized. This means that any access to account code requires providing the complete code, which can be up to 24,000 bytes long.

We can calculate the worst case as follows:

30,000,000 gas / 2,400 ("cold" account read cost) * (5 * 480 + 24,000) = 330,000,000 bytes

The branch cost is slightly lower (5*480 instead of 8*480) because the tops of branches are repeated when there are many of them. But even so, that amount of data to download within one slot is completely unrealistic. If we instead try to wrap it in a STARK, we get two problems: (i) KECCAK is relatively STARK-unfriendly, and (ii) 330 MB of data means we have to prove 5 million calls to the KECCAK round function, which is far too much to prove on anything but the most powerful consumer hardware, even if we can make STARK-proving of KECCAK more efficient.

If we just replace the hexary tree with a binary tree and additionally Merkelize the code, the worst case becomes roughly 30,000,000 / 2,400 * 32 * (32 - 14 + 8) = 10,400,000 bytes (the 14 is subtracted because the tops of the roughly 2^14 branches are redundant, and the 8 is the length of the proof going into a leaf of a code chunk). Note that this requires changing gas costs to charge for accessing each individual code chunk; this is what EIP-4762 does. 10.4 MB is much better, but for many nodes it is still too much data to download within one slot. So we need to introduce more powerful technologies. For this, there are two leading solutions: Verkle trees and STARKed binary hash trees.
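As a sanity check, the arithmetic above can be reproduced directly. This is a sketch; the constants are the ones used in this section's estimates, not protocol parameters:

```python
from math import log2

# Worst-case witness-size arithmetic from this section.
GAS_LIMIT = 30_000_000
COLD_READ_COST = 2_400        # "cold" account read cost

def hexary_branch_bytes(n_items: float) -> float:
    # A 16-ary tree needs 15 sibling hashes of 32 bytes per level;
    # log16(N) = log2(N) / 4.
    return 32 * (16 - 1) * log2(n_items) / 4

def binary_branch_bytes(n_items: float) -> float:
    # A binary tree needs one 32-byte sibling hash per level.
    return 32 * log2(n_items)

print(hexary_branch_bytes(2**32))   # 3840.0 bytes
print(binary_branch_bytes(2**32))   # 1024.0 bytes

# Hexary MPT worst case: each read pulls ~5 levels of unique branch
# data (480 bytes each) plus up to 24,000 bytes of un-Merkelized code.
mpt_worst = GAS_LIMIT // COLD_READ_COST * (5 * 480 + 24_000)
print(mpt_worst)                    # 330,000,000 bytes

# Binary tree + Merkelized code: ~(32 - 14 + 8) unique 32-byte hashes
# per access, per the deduplicated-branch estimate in the text.
binary_worst = GAS_LIMIT // COLD_READ_COST * 32 * (32 - 14 + 8)
print(binary_worst)                 # 10,400,000 bytes
```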

Verkle Tree

Verkle trees use elliptic-curve-based vector commitments to make much shorter proofs. The key unlock is that, regardless of the width of the tree, each parent-child relationship needs only 32 bytes in the proof. The only limit on tree width is that if the tree gets too wide, proofs become computationally inefficient. The implementation proposed for Ethereum has a width of 256.

The size of a single branch in a proof thus becomes 32*log256(N) = 4*log2(N) bytes. The theoretical maximum proof size becomes roughly 30,000,000 / 2,400 * 32 * (32 - 14 + 8) / 8 = 1,300,000 bytes (the math works out slightly differently in practice because of the uneven distribution of state chunks, but this is fine as a first approximation).

As an additional caveat, note that in all of the above examples, this "worst case" is not quite the worst case: an even worse case is an attacker deliberately "mining" two addresses to have a long common prefix in the tree and reading from one of them, which can extend the worst-case branch length by roughly another 2x. Even with this caveat, though, Verkle trees get us to about 2.6 MB worst-case proofs, which roughly matches today's worst-case calldata.

We also take advantage of this caveat to do another thing: we make accessing "adjacent" storage very cheap: either many chunks of code of the same contract, or adjacent storage slots. EIP-4762 provides a definition of adjacency, and charges adjacent accesses only 200 gas. With adjacent accesses, the worst-case proof size becomes 30,000,000 / 200 * 32 = 4,800,000 bytes, which is still roughly within tolerance. If we want to reduce this for safety, we can increase the adjacency access cost slightly.

STARKed binary hash trees

The technique here is very self-explanatory: you make a binary tree, take the max-10.4-MB proof that you would need to prove the values in a block, and replace that proof with a STARK of the proof. This gets us to the point where the proof itself consists only of the data being proven, plus the fixed overhead of the actual STARK.

The main challenge here is prover time. We can do basically the same calculation as above, except instead of counting bytes we count hashes. A 10.4 MB block implies 330,000 hashes. If we add in the possibility of an attacker "mining" addresses with a long common prefix in the tree, the true worst case becomes about 660,000 hashes. So if we can prove about 200,000 hashes per second, we are fine.
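These prover-time estimates can be sanity-checked directly. This is a sketch, not a benchmark; the hash rates are the figures quoted in this section:

```python
# Rough prover-time estimates for the STARKed binary hash tree.
HASHES_WORST = 330_000            # hashes for a 10.4 MB worst-case block
HASHES_MINED = 2 * HASHES_WORST   # attacker "mines" long shared prefixes

def prove_seconds(hashes: int, hashes_per_second: int) -> float:
    return hashes / hashes_per_second

# Poseidon-class prover (~1.7M hashes/s): comfortably sub-second.
print(prove_seconds(HASHES_MINED, 1_700_000))   # ~0.39 s

# Conservative hash (SHA256/BLAKE) at today's ~10-30k hashes/s: too slow.
print(prove_seconds(HASHES_MINED, 30_000))      # 22.0 s

# The ~200k hashes/s target mentioned above brings it within one slot.
print(prove_seconds(HASHES_MINED, 200_000))     # 3.3 s
```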

These numbers have already been reached on a consumer laptop with the Poseidon hash function, which was designed specifically for STARK-friendliness. However, Poseidon is relatively immature, so many people do not yet trust its security. There are thus two realistic paths forward:

  • Quickly do a lot of security analysis on Poseidon and become comfortable enough with it to deploy it at L1

  • Use a more “conservative” hash function, such as SHA256 or BLAKE

As of the time of writing, Starkware's STARK prover can only prove about 10-30k hashes per second on a consumer laptop if we want to prove a conservative hash function. However, STARK technology is improving quickly. Even today, GKR-based techniques show potential to increase this into the roughly 100-200k range.

Witness use cases other than verifying blocks

In addition to verifying blocks, there are three other key use cases for more efficient stateless verification:

  • Mempools: when a transaction is broadcast, nodes in the p2p network need to verify that the transaction is valid before re-broadcasting it. Today, verification includes not only checking the signature, but also checking that the balance is sufficient and the nonce is correct. In the future (e.g. with native account abstraction, such as EIP-7701), this may involve running some EVM code that performs some state accesses. If nodes are stateless, transactions will need to come with proofs proving the state objects.

  • Inclusion lists: this is a proposed feature that allows (potentially small and unsophisticated) proof-of-stake validators to force the next block to include transactions, regardless of the wishes of (potentially large and sophisticated) block builders. This would reduce the ability of powerful actors to manipulate the blockchain by delaying transactions. However, it requires validators to have a way to verify the validity of the transactions in the inclusion list.

  • Light clients: if we want users to access the chain through wallets (e.g. Metamask, Rainbow, Rabby...) without trusting centralized actors, they need to run a light client (e.g. Helios). The core Helios module gives users a verified state root. However, for a fully trustless experience, users need proofs for each individual RPC call they make (e.g. for an eth_call request, a user needs a proof of all the state accessed during the call).

One thing all of these use cases have in common is that they require a fairly large number of proofs, but each proof is small. For this reason, STARK proofs never actually make sense for them; instead, it is most realistic to just use Merkle branches directly. Another advantage of Merkle branches is that they are updatable: given a proof of a state object X rooted in block B, if you receive a child block B2 along with its witness, you can update the proof to be rooted in block B2. Verkle proofs are also natively updatable.
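As a toy sketch of the Merkle-branch verification and updating these use cases rely on (a binary tree over SHA256; Ethereum's real structures differ, and the 4-leaf tree here is purely illustrative):

```python
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def verify_branch(leaf: bytes, index: int, branch: list, root: bytes) -> bool:
    """Hash up from the leaf using the sibling at each level,
    choosing left/right according to the bits of the leaf index."""
    node = leaf
    for sibling in branch:
        node = h(sibling, node) if index & 1 else h(node, sibling)
        index >>= 1
    return node == root

# Build a tiny 4-leaf tree to demonstrate.
leaves = [bytes([i]) * 32 for i in range(4)]
l01, l23 = h(leaves[0], leaves[1]), h(leaves[2], leaves[3])
root = h(l01, l23)

# Branch for leaf 2: its sibling leaf 3, then the l01 subtree.
branch = [leaves[3], l01]
assert verify_branch(leaves[2], 2, branch, root)

# Updatability: if leaf 0 changes in a child block, only the second
# element of leaf 2's branch (the l01 subtree root) needs replacing.
new_leaf0 = b"\xff" * 32
new_l01 = h(new_leaf0, leaves[1])
new_root = h(new_l01, l23)
branch[1] = new_l01
assert verify_branch(leaves[2], 2, branch, new_root)
```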

What are the connections to existing research?

Verkle trees: https://vitalik.eth.limo/general/2021/06/18/verkle.html

John Kuszmaul's original Verkle tree paper: https://math.mit.edu/research/highschool/primes/materials/2018/Kuszmaul.pdf

Starkware proving data: https://x.com/StarkWareLtd/status/1807776563188162562

Poseidon2 paper: https://eprint.iacr.org/2023/323

Ajtai (alternative fast hash function based on lattice hardness): https://www.wisdom.weizmann.ac.il/~oded/COL/cfh.pdf

Verkle.info: https://verkle.info/

What else needs to be done and what trade-offs need to be weighed?

The remaining main tasks to be done are:

  • More analysis of the consequences of EIP-4762 (stateless gas cost changes)

  • More work completing and testing the transition procedure, which is a large part of the complexity of any stateless EIP

  • More security analysis of Poseidon, Ajtai, and other “STARK-friendly” hash functions

  • Further develop ultra-efficient STARK protocols for "conservative" (or "traditional") hash functions, e.g. based on ideas like Binius or GKR

We will soon come to a decision point between three options: (i) Verkle trees, (ii) STARK-friendly hash functions, and (iii) conservative hash functions. Their properties can be roughly summarized as follows:

In addition to these "headline numbers", there are some other important considerations:

  • Today, the Verkle tree code is already quite mature. Using anything other than Verkle would delay deployment, likely to a later hard fork. This may be fine, especially if we need the extra time anyway for hash function analysis or prover implementation, and if we have other important features we want to get into Ethereum as soon as possible.

  • Updating the state root with hashes is faster than with Verkle trees. This means hash-based approaches can reduce full nodes' sync time.

  • Verkle trees have interesting witness-update properties – Verkle witnesses are updatable. This property is useful for mempools, inclusion lists and other use cases, and it may also help make implementations efficient: if a state object is updated, you can update the witness at the second-to-last level without even reading the last level.

  • Verkle trees are harder to SNARK-prove. If we want to get proof sizes all the way down to a few kilobytes, Verkle proofs introduce some difficulties. This is because verifying a Verkle proof involves a large number of 256-bit operations, which requires the proof system to either have a lot of overhead, or itself have a custom internal structure with a 256-bit part for the Verkle proof.

If we want Verkle's witness updatability in a way that is quantum-safe and reasonably efficient, another possible path is lattice-based Merkle trees.

If the proof system turns out to be insufficiently efficient in the worst case, another "rabbit out of the hat" we can use to make up the shortfall is multidimensional gas: separate gas limits for (i) calldata, (ii) computation, (iii) state accesses, and possibly other distinct resources. Multidimensional gas adds complexity, but in exchange it much more tightly bounds the ratio between the average case and the worst case. With multidimensional gas, the theoretical maximum number of branches to prove could drop from 30,000,000 / 2,400 = 12,500 to, say, 3,000. This would make BLAKE3 (barely) sufficient even today, with no further prover improvements.

Multidimensional gas allows a block's resource limits to more closely replicate the resource limits of the underlying hardware.

Another "rabbit out of the hat" is the proposal to delay the state root computation until the slot after a block. This would give us a full 12 seconds to compute the state root, meaning that even in the most extreme cases, a proving time of only about 60,000 hashes/second is sufficient, which again puts us in the range where BLAKE3 is barely adequate.

The downside of this approach is that it increases light-client latency by one slot, though there are cleverer versions of the technique that reduce this latency to just the proof generation latency. For example, the proof can be broadcast across the network as soon as any node generates it, rather than waiting for the next block.

How does it interact with the rest of the roadmap?

Solving statelessness greatly increases the convenience of solo staking. This becomes even more valuable if technologies that can lower the minimum balance for solo staking, such as Orbit SSF, or application-layer strategies such as squad staking, become available.

Multidimensional gas becomes easier if EOF is also introduced. This is because a key complexity of multidimensional gas for execution is handling sub-calls that do not pass along the parent call's full gas, and EOF simply makes such sub-calls illegal (and native account abstraction would provide an in-protocol alternative for the current main use case of partial-gas sub-calls).

Another important synergy is between stateless verification and history expiry. Today, clients must store nearly a terabyte of historical data; this is several times larger than the state data. Even if clients are stateless, the dream of nearly storage-free clients cannot be realized unless we can also relieve clients of the responsibility of storing history. The first step here is EIP-4444, which also implies storing historical data in torrents or the Portal network.

Proof of validity of EVM execution

What problems do we want to solve?

The long-term goal of Ethereum block verification is clear: you should be able to verify an Ethereum block by (i) downloading the block, or perhaps even just small parts of it with data availability sampling, and (ii) verifying a small proof that the block is valid. This would be a minimal-resource-consumption operation, doable on a mobile client, inside a browser wallet, or even (without the data availability part) in another chain.

Achieving this requires SNARK or STARK proofs of (i) the consensus layer (i.e. proof of stake) and (ii) the execution layer (i.e. the EVM). The former is a challenge in itself and should be addressed in the process of further improving the consensus layer (e.g. for single-slot finality). The latter requires proofs of EVM execution.

What is it and how does it work?

Formally, in the Ethereum specification, the EVM is defined as a state transition function: you have some pre-state S and a block B, and you compute a post-state S' = STF(S, B). If a user is running a light client, they do not have S and S', or even the full B; instead, they have a pre-state root R, a post-state root R', and the block hash H. The full statement that needs to be proven is approximately:

  • Public input:The previous state root R, the latter state root R’, and the block hash H.

  • Private inputs: the block B, the objects in the state accessed by the block Q, the same objects after executing the block Q', and state proofs (e.g. Merkle branches) P.

  • Proposition 1: P is a valid proof that Q contains some portion of the state represented by R.

  • Proposition 2: if you run the STF on Q, (i) the execution accesses only objects inside Q, (ii) the block is valid, and (iii) the result is Q'.

  • Proposition 3: if you recompute the new state root using the information in Q' and P, you get R'.
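The three propositions can be illustrated with a deliberately degenerate toy: the "state root" is just a hash of the canonically serialized full state, and the "proof" P is simply the rest of the state, so verifying proposition 1 means recombining and rehashing. Real implementations use Merkle or Verkle branches and a real STF; everything here is illustrative:

```python
import hashlib, json

def state_root(state: dict) -> str:
    """Toy commitment: hash of the canonically serialized full state."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def stf(accessed: dict, block: list) -> dict:
    """Toy state transition: a block is a list of (account, balance delta)."""
    post = dict(accessed)
    for account, delta in block:
        post[account] = post.get(account, 0) + delta
    return post

def stateless_verify(r: str, r_post: str, block: list, q: dict, p: dict) -> bool:
    # Proposition 1: P proves that Q is part of the state committed to by R.
    if state_root({**q, **p}) != r:
        return False
    # Proposition 2: running the STF on Q touches only objects inside Q.
    if any(account not in q for account, _ in block):
        return False
    q_post = stf(q, block)
    # Proposition 3: recombining Q' with P yields the claimed post-root R'.
    return state_root({**q_post, **p}) == r_post

pre = {"alice": 100, "bob": 50, "carol": 7}
block = [("alice", -10), ("bob", +10)]
q = {"alice": 100, "bob": 50}     # witness values: the accessed state
p = {"carol": 7}                  # toy "proof": the untouched remainder
r = state_root(pre)
r_post = state_root({"alice": 90, "bob": 60, "carol": 7})
print(stateless_verify(r, r_post, block, q, p))   # True
```

In the real protocol the SNARK proves these three propositions, so the verifier sees only R, R' and H, never Q, Q' or P.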

If this exists, you can have a light client that fully verifies Ethereum EVM execution. This already makes the client's resource footprint quite small. To get a truly fully verifying Ethereum client, you also need to do the same for the consensus side.

Validity proof implementations for EVM computation already exist and are used extensively by L2s. However, much work remains to make EVM validity proofs viable for L1.

What are the connections to existing research?

The PSE ZK-EVM (now deprecated because better options exist): https://github.com/privacy-scaling-explorations/zkevm-circuits

Zeth, which works by compiling the EVM into the RISC Zero ZK-VM: https://github.com/risc0/zeth

ZK-EVM formal verification project:https://verified-zkevm.org/

What else needs to be done and what trade-offs need to be weighed?

Today, EVM validity proofs fall short in two respects: security and prover time.

A secure validity proof requires assurance that the SNARK actually verifies the EVM computation and contains no bugs. The two main techniques for improving security are multi-provers and formal verification. Multi-proving means having multiple independently written validity proof implementations, much like having multiple clients, and having clients accept a block if a sufficiently large subset of these implementations prove it. Formal verification involves using tools normally used to prove mathematical theorems (e.g. Lean4) to prove that the validity proof accepts only inputs that correctly execute the underlying EVM specification, written in Python.

Sufficiently fast prover time means that any Ethereum block can be proven in under 4 seconds. Today we are still far from this goal, although we are much closer than was thought possible two years ago. To get there, we need progress on three fronts:

  • Parallelization – the fastest EVM prover today can prove an average Ethereum block in about 15 seconds. It does this by parallelizing across hundreds of GPUs and then aggregating their work at the end. In theory, we know exactly how to make an EVM prover that proves a computation in O(log(N)) time: have one GPU do each step, and then do an "aggregation tree":

There are challenges in implementing this. Even in worst cases, where a very large transaction takes up an entire block, the split of the computation cannot be done per-transaction; it must be per-opcode (of the EVM, or of an underlying VM such as RISC-V). A key implementation challenge that makes this not entirely trivial is the need to ensure that the VM's "memory" is consistent across different parts of the proof. But if we can build this kind of recursive proof, then we know that at least the prover latency problem is solved, even without improvements on any other axis.
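The shape of the aggregation tree can be sketched as follows, with hashes standing in for real per-chunk proofs and for recursive proof aggregation (the names and structure are illustrative only; a real system must also carry memory-consistency constraints between chunks):

```python
import hashlib

def prove_chunk(chunk: str) -> str:
    """Stand-in for a real proof of one opcode range (one GPU per chunk)."""
    return hashlib.sha256(("leaf:" + chunk).encode()).hexdigest()

def aggregate(left: str, right: str) -> str:
    """Stand-in for recursively aggregating two child proofs into one."""
    return hashlib.sha256(("node:" + left + right).encode()).hexdigest()

def prove_block(trace: list) -> str:
    """Prove all chunks 'in parallel', then combine them pairwise:
    O(log N) sequential aggregation rounds instead of O(N) work."""
    layer = [prove_chunk(c) for c in trace]   # the fully parallel round
    rounds = 0
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])           # pad odd-sized layers
        layer = [aggregate(layer[i], layer[i + 1])
                 for i in range(0, len(layer), 2)]
        rounds += 1
    print(f"{rounds} aggregation rounds")     # = ceil(log2(#chunks))
    return layer[0]

proof = prove_block([f"opcodes[{i}]" for i in range(8)])  # 3 rounds
```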

  • Proof system optimization – new proof systems such as Orion, Binius and GKR are likely to lead to yet another large reduction in prover time for general-purpose computation.

  • Changes to the EVM's gas costs and other aspects – many things in the EVM could be optimized to be more prover-friendly, especially in the worst case. Being able to prove an average Ethereum block in 4 seconds is not enough if an attacker can construct a block that takes the prover ten minutes. The required EVM changes fall into two broad categories:

  • Gas cost changes – if an operation takes a long time to prove, it should have a high gas cost, even if it is relatively fast to compute. EIP-7667 is a proposed EIP to deal with the worst offender here: it significantly increases the gas cost of (traditional) hash functions, which are exposed as relatively cheap opcodes and precompiles. To compensate for these gas cost increases, we can reduce the gas cost of EVM opcodes that are relatively cheap to prove, keeping average throughput the same.

  • Data structure replacement – in addition to replacing the state tree with something more STARK-friendly, we also need to replace the transaction list, the receipt tree, and other structures that are expensive to prove. Etan Kissling's EIPs that move the transaction and receipt structures to SSZ ([1] [2] [3]) are a step in this direction.

In addition, the two "rabbits out of the hat" mentioned in the previous section (multidimensional gas and delayed state roots) can also help here. However, it is worth noting that, unlike stateless verification, where using these techniques means we have enough technology to do what we need today, even with these technologies full ZK-EVM verification will take more work – it just takes less work.

One thing not mentioned above is prover hardware: using GPUs, FPGAs and ASICs to generate proofs faster. Fabric Cryptography, Cysic and Accseal are three companies pushing this forward. This is very valuable for layer 2, but it is unlikely to be a decisive consideration for layer 1, because there is a strong desire to keep layer 1 highly decentralized, which means proof generation must be within the capabilities of a reasonably large subset of Ethereum users, who should not be bottlenecked by a single company's hardware. Layer 2 can make more aggressive trade-offs.

There is more work to be done in these areas:

  • Parallelized proving requires proof systems in which different parts of a proof can "share memory" (e.g. lookup tables). We know techniques for doing this, but they need to be implemented.

  • We need more analysis to find the ideal set of gas cost changes that minimizes worst-case proving time.

  • We need more work on the proof systems themselves

Possible trade-offs here include:

  • Security vs. prover time: aggressive choices of hash functions, proof systems with more complexity or more aggressive security assumptions, and other design choices may reduce prover time.

  • Decentralization vs. prover time: the community needs to agree on the "specs" of the prover hardware it is targeting. Is it okay if provers have to be large-scale entities? Do we want a high-end consumer laptop to be able to prove an Ethereum block in 4 seconds? Something in between?

  • The degree to which we break backwards compatibility: shortfalls elsewhere can be compensated for with more aggressive gas cost changes, but this is more likely to disproportionately increase costs for some applications, forcing developers to rewrite and redeploy code to stay economically viable. Likewise, the "rabbits out of the hat" have their own complexities and downsides.

How does it interact with the rest of the roadmap?

The core technologies required to implement EVM validity proofs at layer 1 are largely shared with two other areas:

  • Proof of validity of layer 2 (i.e. “ZK rollups”)

  • The "STARKed binary hash tree" stateless approach

Successfully implementing validity proofs at layer 1 ultimately enables easy solo staking: even the weakest computers, including phones and smartwatches, will be able to stake. This further increases the value of removing other limitations on solo staking (e.g. the 32 ETH minimum).

Furthermore, EVM validity proofs on L1 make it possible to raise the L1 gas limit significantly.

Proof of validity of consensus

What problems do we want to solve?

If we want to be able to fully verify Ethereum blocks with SNARKs, then EVM execution is not the only part we need to prove. We also need to prove the consensus: the parts of the system that handle deposits, withdrawals, signatures, validator balance updates, and the other elements of Ethereum's proof of stake.

Consensus is much simpler than the EVM, but it faces the challenge that we do not have layer 2 EVM rollups already doing this work, so most of the work would need to be done anyway. Any implementation of proving Ethereum consensus must be built "from scratch", although the proof systems themselves are shared work that can be built upon.

What is it and how does it work?

The beacon chain is defined as a state transition function, just like the EVM. The state transition function is dominated by three things:

  • ECADD (for verification of BLS signatures)

  • Pairing (for verification of BLS signatures)

  • SHA256 hashes (for reading and updating the state)

In each block, we need to prove 1-16 BLS12-381 ECADDs per validator (possibly more than one, because a signature can be included in multiple aggregates). This can be compensated for with subset precomputation techniques, so overall we can say it is one BLS12-381 ECADD per validator. Today, about 30,000 validators sign in each slot. In the future, with single-slot finality, this may change in either direction (see the explanation here): if we take the "brute force" route, it could rise to 1 million validators per slot; meanwhile, with Orbit SSF, it would stay at 32,768, or even shrink to 8,192.

How BLS aggregation works. Verifying the aggregate signature requires only an ECADD per participant, rather than an ECMUL. But 30,000 ECADDs is still a lot to prove.

For pairings, there are currently a maximum of 128 attestations per slot, which means 128 pairings need to be verified. With EIP-7549 and further changes, this could be reduced to 16 per slot. The number of pairings is small, but each one is extremely expensive: a pairing takes (or proves in) thousands of times longer than an ECADD.

A major challenge in proving BLS12-381 operations is the lack of a convenient curve whose order equals the BLS12-381 field size, which adds considerable overhead to any proof system. On the other hand, the Verkle trees proposed for Ethereum are built on the Bandersnatch curve, which makes BLS12-381 itself the natural curve to use in a SNARK system for proving Verkle branches. A fairly naive implementation can prove about 100 G1 additions per second; clever techniques like GKR will almost certainly be needed to make proving fast enough.

For SHA256 hashing, the worst case today is the epoch-transition block, where the entire validator short-balance tree, along with a significant number of validator balances, gets updated. The validator short-balance tree has one byte per validator, so about 1 MB of data gets rehashed. This corresponds to 32,768 SHA256 calls. If a thousand validators' balances cross above or below the threshold at which the effective balance in the validator record must be updated, that corresponds to a thousand Merkle branches, so perhaps another ten thousand hashes. The shuffling mechanism requires 90 bits per validator (and thus 11 MB of data), but this can be computed at any point during an epoch. With single-slot finality, these numbers may again rise or fall depending on the details. Shuffling becomes unnecessary, although Orbit may bring back the need for some amount of it.

Another challenge is the need to read the full validator state, including public keys, in order to verify a block. For 1 million validators, reading the public keys alone requires 48 million bytes, plus Merkle branches. This requires millions of hashes per epoch. If we had to prove proof-of-stake validity today, a realistic approach would be some form of incrementally verifiable computation: store a separate data structure inside the proof system that is optimized for efficient lookups, and prove updates to that structure.
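Putting this section's figures together gives a rough feel for the per-slot and per-epoch proving workload. This is a back-of-the-envelope sketch using only the numbers quoted above (the pairing-to-ECADD ratio of 1,000 is a stand-in for "thousands of times"):

```python
# BLS verification work per slot, in "ECADD-equivalents".
ECADDS_PER_SLOT = 30_000          # ~one ECADD per signing validator today
PAIRING_COST_IN_ECADDS = 1_000    # stand-in for "thousands of times" an ECADD

def slot_workload(pairings: int) -> int:
    return ECADDS_PER_SLOT + pairings * PAIRING_COST_IN_ECADDS

print(slot_workload(128))   # 158,000: with today's 128 aggregates
print(slot_workload(16))    # 46,000: after EIP-7549-style changes

# Epoch transition: one byte per validator (~1 MB for 2^20 validators),
# rehashed in 32-byte chunks => ~32,768 SHA256 calls, plus ~10,000 more
# for effective-balance Merkle-branch updates.
balance_hashes = 2**20 // 32
print(balance_hashes)       # 32,768

# Reading validator state: 48-byte public keys for 1M validators.
pubkey_bytes = 1_000_000 * 48
print(pubkey_bytes)         # 48,000,000 bytes, before Merkle branches
```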

All in all, there are many challenges.

The most efficient way to solve these challenges is probably a deep redesign of the beacon chain, which may happen at the same time as the switch to single-slot finality. Features of this redesign may include:

  • Hash function change: today, the "full" SHA256 hash function is used, so due to padding each call corresponds to two underlying compression function calls. At minimum, we get a 2x gain by switching to the SHA256 compression function. If we switch to Poseidon, we get a roughly 100x gain, which completely solves all our problems (at least for hashes): at 1.7 million hashes per second (54 MB/s), even a million validator records can be "read" into a proof within seconds.

  • In the case of Orbit, store shuffled validator records directly: if a certain number of validators (e.g. 8,192 or 32,768) are chosen as the committee for a given slot, put them directly into the state adjacent to each other, so that the minimum amount of hashing is needed to read all the committee’s public keys into the proof. This would also allow all balance updates to be done efficiently.

  • Signature aggregation: any high-performance signature aggregation scheme will in practice involve some kind of recursive proving, where intermediate proofs over subsets of the signatures are made by individual nodes in the network. This naturally spreads the proving load across many nodes, making the work of the “final prover” much smaller.

  • Other signature schemes: for a Lamport+Merkle signature, we need 256+32 hashes to verify a signature; multiplied by 32,768 signers, that gives 9,437,184 hashes. Optimizing the signature scheme can improve this further by a small constant factor. If we use Poseidon, this is within range of being proven within a single slot. In practice, though, this gets much faster with a recursive aggregation scheme.
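The hash counts in the bullets above can be checked with a short script. The padding behavior follows the SHA-256 specification (at least 9 bytes of padding: a 0x80 marker plus an 8-byte length); the Lamport+Merkle figures are taken directly from the text:

```python
import math

def sha256_compression_calls(msg_len: int) -> int:
    """Full SHA256 pads the message with at least 9 bytes (0x80 marker
    plus an 8-byte length), then processes 64-byte blocks."""
    return math.ceil((msg_len + 9) / 64)

# A Merkle node hashes two 32-byte children (64 bytes of input), so full
# SHA256 costs 2 compression calls where the bare compression function
# costs 1 -- hence the ~2x gain mentioned above:
print(sha256_compression_calls(64))   # 2

# Lamport+Merkle verification cost from the last bullet:
LAMPORT_HASHES = 256   # one hash per message bit
MERKLE_HASHES = 32     # branch authenticating the one-time key
SIGNERS = 32_768
total = (LAMPORT_HASHES + MERKLE_HASHES) * SIGNERS
print(total)           # 9_437_184, matching the figure above
```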
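To see why recursive aggregation spreads the proving load well, consider the depth of the aggregation tree. The fan-in of 64 child proofs per node is a hypothetical parameter chosen for illustration:

```python
import math

def aggregation_rounds(signers: int, fan_in: int = 64) -> int:
    """Rounds of recursive aggregation if each node combines
    up to `fan_in` child signatures/proofs per round."""
    rounds = 0
    while signers > 1:
        signers = math.ceil(signers / fan_in)
        rounds += 1
    return rounds

# 32,768 signers collapse to a single final proof in 3 rounds, so the
# "final prover" only has to verify a handful of child proofs:
print(aggregation_rounds(32_768))   # 3
```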

What are the connections to existing research?

Succinct, Ethereum consensus proofs (sync committee only):https://github.com/succinctlabs/eth-proof-of-consensus

Succinct, Helios in SP1:https://github.com/succinctlabs/sp1-helios

Succinct BLS12-381 precompile:https://blog.succinct.xyz/succinctshipsprecompiles/

Halo2-based BLS aggregate signature verification:https://ethresear.ch/t/zkpos-with-halo2-pairing-for-verifying-aggregate-bls-signatures/14671

What else needs to be done and what trade-offs need to be weighed?

In reality, it will take years before validity proofs of the Ethereum consensus become feasible. This is roughly the same timeline needed to implement single-slot finality, Orbit, changes to the signature algorithm, and the security analysis required to have enough confidence to use an “aggressive” hash function like Poseidon. Therefore, it makes the most sense to work on those other problems, and to keep STARK-friendliness in mind while doing so.

The main trade-off is likely to be in the order of operations: between a more incremental approach to reforming the Ethereum consensus layer and a more radical “change many things at once” approach. For the EVM, the incremental approach makes sense because it minimizes disruption to backward compatibility. For the consensus layer, backward compatibility concerns are smaller, and there are benefits to a more “wholesale” rethink of the various details of how the beacon chain is built, so as to best optimize for SNARK-friendliness.

How does it interact with the rest of the roadmap?

STARK-friendliness needs to be a major focus of the long-term redesign of Ethereum’s proof-of-stake consensus, most notably single-slot finality, Orbit, signature scheme changes, and signature aggregation.
