Understanding the Iteration of Arweave's Consensus Mechanism in One Article

Author: Arweave; Source: X, @Aarweaveoasis

Since its launch in 2018, the Arweave ecosystem has been considered one of the most valuable networks in the decentralized storage track. Yet precisely because of its technical sophistication, Arweave/AR remains both familiar and unfamiliar to many people. This article walks through the history of Arweave's technical development since its founding, to deepen everyone's understanding of the project.
Arweave has gone through more than ten major technical upgrades in five years. The core goal of these iterations has been to shift from a compute-driven mining mechanism to a storage-driven one.


Arweave 1.5: Mainnet launch
The Arweave mainnet launched on November 18, 2018. The weave was only 177 MiB in size at the time. In some ways the Arweave of that era resembled today's: a two-minute block time and an upper limit of 1,000 transactions per block. There were also notable differences, such as a transaction size limit of only 5.8 MiB, and a mining mechanism called Proof of Access.
So the question is: what is Proof of Access (PoA)?

Simply put, to generate a new block, miners must prove that they can access other blocks in the chain's history. Proof of Access randomly selects a historical block from the chain and requires miners to include that historical block, called the recall block, in the block they are trying to produce, as a complete copy of the recall block.
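The recall-block selection described above can be sketched as follows. This is a minimal illustration, assuming a simple hash-mod-height rule; Arweave's actual selection function differs in detail.

```python
import hashlib

def choose_recall_height(prev_block_hash: bytes, chain_height: int) -> int:
    """Deterministically derive a historical block height from the previous
    block's hash. Every node derives the same height, so everyone agrees on
    which recall block the next block must contain."""
    digest = hashlib.sha256(prev_block_hash).digest()
    return int.from_bytes(digest, "big") % chain_height

# A miner can only produce the next block if it can supply the full recall
# block at this height, either from its own disk or from a peer.
recall_height = choose_recall_height(b"hash-of-block-999", chain_height=1000)
```

Because the height depends only on public chain state, any node can verify that a candidate block embeds the correct recall block.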
The idea at the time was that miners did not need to store all blocks; as long as they could prove access to them, they could join the mining competition. (DMAC uses a race as a metaphor in his video to aid understanding, and it is borrowed here.)
First, the race has a finish line. The finish line moves with the number of participants and the mining speed, ensuring that each race ends in roughly two minutes. This is where the two-minute block time comes from.
Second, the race is divided into two parts.
· The first part can be called the qualifier: miners must prove that they can access the historical block. Once the designated block is in hand, they advance to the final. Miners who do not store the block are not out of the game; they can fetch it from their peers and still join in.
· The second part is the final that follows the qualifier: mining with hash power in proof-of-work fashion, essentially burning energy to compute hashes and ultimately win the race.
Once a miner crosses the finish line, the race is over and the next one begins. The mining reward goes to a single winner, which makes the competition extremely intense. With this in place, Arweave began to grow rapidly.


Arweave 1.7: RandomX
Early Arweave was a very simple, easy-to-understand mechanism, but it did not take long for researchers to recognize a bad outcome it could produce: miners might adopt strategies harmful to the network, which we call degenerate strategies.
The root cause is that miners who do not store the designated recall block must fetch it from others, leaving them one step behind the miners who do store it, a loss at the starting line. The workaround, however, was simple: stack a large number of GPUs and make up for the handicap with sheer computing power and heavy energy consumption. That way, such miners could even beat the miners who store the blocks and keep fast access to them. If this strategy became mainstream, miners would stop storing and sharing blocks, and would instead pour everything into optimizing compute hardware and burning energy to win the race. The end result would be a sharp decline in the network's usefulness and a gradual centralization of the data, clearly a degenerate direction for a storage network.

To solve this problem, Arweave 1.7 was released.
The headline feature of this version is the introduction of a mechanism called RandomX. It is a hash function that is very difficult to run on GPU or ASIC hardware, which pushes miners to abandon GPU stacking and participate in the hashing competition with ordinary CPUs only.
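To illustrate the intent (not RandomX itself, which executes randomly generated programs in a virtual machine), here is a sketch using scrypt from Python's standard library as a stand-in for a hash that resists GPU/ASIC acceleration. The salt and cost parameters are illustrative assumptions.

```python
import hashlib

def cpu_bound_hash(block_data: bytes, nonce: int) -> bytes:
    """Stand-in for RandomX: scrypt's memory-hardness makes each hash
    expensive to accelerate with specialized hardware. Parameters here
    are illustrative only."""
    return hashlib.scrypt(
        block_data + nonce.to_bytes(8, "big"),
        salt=b"arweave-demo",
        n=2**12, r=8, p=1,   # ~4 MiB of memory per hash
        dklen=32,
    )
```

The key property is that each attempt consumes significant memory bandwidth, so stacking cheap parallel compute buys little advantage over a commodity CPU.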
Arweave 1.8/1.9: 10 MiB transaction size and SQLite
Besides proving that they can access historical blocks, miners have an even more important duty: processing the transactions users publish to Arweave.
All new user transaction data must be packed into new blocks; that is the minimum requirement for any public chain. In the Arweave network, when a user submits a transaction to a miner, that miner not only packs the data into the block it is about to submit, but also shares it with other miners, so that every miner can pack the transaction into their candidate blocks. Why do they do this? There are at least two reasons:
· They are economically motivated: every transaction included in a block increases the block reward. By sharing transaction data with each other, miners ensure that whoever wins the block race collects the maximum reward.
· To prevent a death spiral in the network's growth. If users' transactions frequently fail to be included in blocks, users will drift away, the network will lose its value, and miners' income will shrink. Nobody wants to see that.
So miners choose mutual benefit to maximize their own interests. But this creates a data-transmission problem that became a bottleneck for network scalability: the more transactions, the bigger the blocks, and the 5.8 MiB transaction limit no longer helped. Arweave therefore raised the transaction size limit to 10 MiB via a hard fork, which brought some relief.


Even so, the transmission bottleneck was not solved. Arweave is a globally distributed miner network, and all miners need to stay in sync, yet each miner's connection speed differs, which caps the network at its average connection speed. For the network to produce a new block every two minutes, that average connection must be able to carry the data uploaded within those two minutes. If users upload more data than the network's average connection can move, congestion results and the network's utility drops. This would become a stumbling block for Arweave's growth, so the subsequent 1.9 release adopted infrastructure such as SQLite to improve network performance.
Arweave 2.0: SPoA
In March 2020, the Arweave 2.0 release brought two important updates that unlocked the network's scalability and removed the capacity limits on storing data on Arweave.
The first update is succinct proofs of access (SPoA). Built on a Merkle tree structure, it lets miners prove that they have stored every byte of a block by providing a compressed Merkle branch path. The change this brings is that miners only need to pack a succinct proof of less than 1 KiB into the block, instead of a recall block that might be 10 GiB.
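The branch-path idea can be sketched with a toy Merkle tree. This is a minimal illustration assuming SHA-256 and plain leaf hashing; Arweave's real tree also commits to byte offsets and differs in its hashing details.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree over the leaf data."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])            # duplicate last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes along the path from leaf `index` up to the root."""
    level = [h(x) for x in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2))  # (sibling, our side)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, path, root):
    node = h(leaf)
    for sibling, node_is_right in path:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

chunks = [b"chunk-%d" % i for i in range(8)]
root = merkle_root(chunks)
proof = merkle_proof(chunks, 5)
```

The proof grows logarithmically with the data: for 8 leaves it contains just 3 sibling hashes, which is why a sub-KiB proof can stand in for gigabytes of recall-block data.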
The second update is "format 2 transactions". This version optimizes the transaction format with the aim of slimming down the blocks shared between nodes. Whereas a "format 1 transaction" requires the transaction header and its data to be added to a block together, a "format 2 transaction" allows the header and the data to be separated. During the block race between miner nodes, apart from the succinct proof of the recall block, each transaction only needs its header added to the block; the transaction data can be attached after the race is over. This greatly reduces the transmission load while miner nodes compete to assemble blocks.
The result of these updates is blocks that are far lighter and easier to transmit than before, freeing up surplus bandwidth in the network. Miners then use that surplus bandwidth to transmit the data of "format 2 transactions", since that data will become future recall blocks. The scalability problem was thus solved.
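The header/data split can be sketched like this. The field names are hypothetical and the real format 2 transaction carries more fields (signature, tags, and so on); the point is that the header commits to the data by hash, so blocks can race with headers only.

```python
from dataclasses import dataclass
from typing import Optional
import hashlib

@dataclass
class Tx:
    """Hypothetical format-2-style transaction: the header commits to the
    data by hash, so the data itself can travel after the block race."""
    owner: str
    data_root: bytes               # commitment to the payload
    data: Optional[bytes] = None   # attached later, verified against root

def make_tx(owner: str, data: bytes) -> Tx:
    return Tx(owner, hashlib.sha256(data).digest(), data=None)

# Miners race with header-only blocks: a few hundred bytes per transaction
# instead of up to 10 MiB of payload.
payload = b"x" * 10_000
tx = make_tx("alice", payload)
```

When the payload is later uploaded, any node can check it against the committed `data_root` before accepting it.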
Arweave 2.4: SPoRA

So far, were all of the Arweave network's problems solved? Obviously not. A new problem grew out of the SPoA mechanism.
A mining strategy reminiscent of GPU stacking appeared again. This time it was not centralized GPU compute, but a mainstream strategy with potentially even stronger centralizing effects: fast-access storage pools. All historical blocks sit in these pools; whenever proof of access picks a random recall block, the pool can generate the proof quickly and then sync it among miners at very high speed.
On the surface this seems harmless: under such a strategy the data still gets enough replication and storage. The problem is that it subtly shifts miners' focus. Miners are no longer incentivized to maintain fast access to data, because obtaining and transmitting the proof has become easy and quick; instead, they pour most of their energy into proof-of-work hashing rather than data storage. Isn't this just another form of degenerate strategy?


As a result, Arweave went through several feature upgrades, such as the data-indexing iteration, wallet list compression, and the migration of V1 transaction data, before finally arriving at another major version iteration: SPoRA, succinct proofs of random access.
SPoRA truly ushered Arweave into a new era, using mechanism iteration to turn miners' attention from hash computation to data storage.
So what is different about succinct proofs of random access?
It rests on two prerequisites:
· An indexed dataset. Thanks to the indexing feature in version 2.1, every chunk of data in the weave is labeled with a global offset, so each chunk can be accessed quickly through that offset. This enables SPoRA's core mechanism: continuous retrieval of chunks. It is worth noting that a chunk here is the minimal data unit into which large files are split, 256 KiB in size; it is not the block of a blockchain.
· A slow hash, used to randomly select the candidate chunk. Thanks to the RandomX algorithm introduced in version 1.7, miners cannot win by stacking compute power and can only calculate with CPUs.
On these two prerequisites, the SPoRA mechanism runs in five steps:
Step 1: generate a random number and, together with the previous block's information, produce a slow hash via RandomX;
Step 2: use this slow hash to compute a unique recall byte (the recall byte is the chunk's global offset);
Step 3: the miner uses this recall byte to look up the corresponding chunk in its own storage. If the miner does not store the chunk, return to step 1 and start over;
Step 4: hash the slow hash from step 1 together with the chunk just found, producing a fast hash;
Step 5: if the resulting hash exceeds the current mining difficulty, the block is mined and distributed; otherwise, return to step 1 and start over.
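The five steps can be sketched as a single mining round. SHA-256 stands in for both the RandomX slow hash and the fast hash; `local_chunks`, the offset arithmetic, and the difficulty comparison are simplified assumptions.

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # 256 KiB chunks, as described above

def spora_round(prev_block: bytes, nonce: int, weave_size: int,
                local_chunks: dict, difficulty: int):
    """One SPoRA attempt. `local_chunks` maps a chunk's global byte offset
    to its bytes. Returns the winning hash, or None to retry with a new
    nonce (steps 3 and 5 both loop back to step 1)."""
    # Step 1: slow hash of the nonce plus previous-block info
    # (RandomX replaced by SHA-256 for illustration).
    slow = hashlib.sha256(prev_block + nonce.to_bytes(8, "big")).digest()
    # Step 2: derive the recall byte, a global offset into the weave.
    recall_byte = int.from_bytes(slow, "big") % weave_size
    # Step 3: look up the chunk containing that byte in local storage.
    chunk = local_chunks.get((recall_byte // CHUNK_SIZE) * CHUNK_SIZE)
    if chunk is None:
        return None                       # not stored locally: start over
    # Step 4: fast hash of the slow hash plus the chunk.
    fast = hashlib.sha256(slow + chunk).digest()
    # Step 5: win only if the result beats the current difficulty.
    return fast if int.from_bytes(fast, "big") > difficulty else None
```

Notice that a miner who stores no chunks fails at step 3 every round, while the slow hash caps how many rounds per second anyone can attempt; that combination is what rewards storage over raw compute.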
It is easy to see how strongly this incentivizes miners to store as much data as possible on disks connected to their CPUs over a very fast bus, rather than in some faraway storage pool, completing the reversal of mining strategy from compute-oriented to storage-oriented.
Arweave 2.5: Packing and data surge
SPoRA set miners furiously storing data, because it was the lowest-hanging fruit for improving mining efficiency. What happened next?
Some clever miners realized that the real bottleneck under this mechanism is the rate at which data can be read from hard drives. The more chunks fetched from disk, the more succinct proofs can be computed, the more hashes can be executed, and the higher the chance of mining a block.
So if a miner spends ten times as much on drives, for example using SSDs with faster read/write speeds to store data, the miner's hash rate rises roughly tenfold. Of course, this invites an arms race similar to the GPU one: even faster storage, such as RAM drives with higher transfer speeds, would appear too. But it all comes down to the input-output ratio.
At present, the fastest a miner can generate hashes is bounded by SSD read/write speed, which caps energy consumption well below a pure PoW model and is therefore more environmentally friendly.
Is this perfect? Of course not. The technical team believed it could do better still.
To support large-scale data uploads, Arweave 2.5 introduced the data bundle mechanism. Although not a true protocol upgrade, it has always been an important part of the scalability plan, and it caused the size of the network to explode, because it breaks the 1,000-transaction cap mentioned at the start: a data bundle occupies only one of those 1,000 transactions. This laid the groundwork for Arweave 2.6.
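The effect of bundling can be sketched as follows: many data items are serialized into the payload of a single transaction, so they consume just one of the 1,000 per-block slots. The JSON encoding here is a naive stand-in; the real bundle format (ANS-104) is binary.

```python
import json

def bundle(items: list) -> bytes:
    """Serialize many data items into one transaction payload
    (naive JSON stand-in for the binary ANS-104 bundle format)."""
    return json.dumps([i.hex() for i in items]).encode()

def unbundle(payload: bytes) -> list:
    return [bytes.fromhex(s) for s in json.loads(payload)]

# 10,000 user uploads, but only ONE of the block's 1,000 transaction slots.
items = [b"item-%d" % i for i in range(10_000)]
tx_data = bundle(items)
```

Since the bundle is just transaction data to the base layer, this scaling required no consensus change, which is why the text calls it "not a true protocol upgrade".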


Arweave 2.6
Arweave 2.6 was the biggest version upgrade since SPoRA. Building on everything before it, it took another step toward the original vision: lowering the cost of Arweave mining to foster a more decentralized base of miners.
What is different about it? Space permits only a brief introduction here; a more detailed reading of Arweave 2.6's mechanism design will follow later.
In simple terms, Arweave 2.6 is a speed-limited version of SPoRA. It adds a verifiable cryptographic clock that ticks once per second, called the hash chain.
· Each tick produces one mining hash;
· Miners choose the index of a data partition they store to participate in mining;
· Combining the mining hash with the partition index yields a recall range within the miner's chosen stored partition. The recall range contains 400 recall chunks, which are the chunks the miner can use for mining. In addition, a second recall range is chosen at random elsewhere in the weave; a miner who stores enough data partitions gains access to this range 2 as well, that is, mining opportunities on another 400 recall chunks, raising the chance of winning. This strongly incentivizes miners to store full replicas of the dataset's partitions.
· The miner tests the chunks within the recall range one by one. If a result beats the current network difficulty, the miner wins the right to produce the block; if not, it tries the next chunk.
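The hash chain and recall-range selection above can be sketched like this. The partition size and the offset derivation are illustrative assumptions, not the protocol's real constants or formulas.

```python
import hashlib

CHUNK_SIZE = 256 * 1024
RECALL_CHUNKS = 400            # chunks per recall range, per the text
PARTITION_CHUNKS = 1_000_000   # chunks per partition (illustrative)

def next_tick(prev_hash: bytes) -> bytes:
    """One tick of the hash chain. Each output feeds the next input, so the
    chain can only be computed sequentially: a verifiable clock that limits
    mining hashes to one per tick."""
    return hashlib.sha256(prev_hash).digest()

def recall_range(mining_hash: bytes, partition_index: int) -> range:
    """Map the per-tick mining hash plus a stored partition's index to the
    chunk indices of that partition's 400-chunk recall range (sketch; the
    real derivation differs)."""
    seed = hashlib.sha256(
        mining_hash + partition_index.to_bytes(4, "big")).digest()
    start = int.from_bytes(seed, "big") % (PARTITION_CHUNKS - RECALL_CHUNKS)
    base = partition_index * PARTITION_CHUNKS
    return range(base + start, base + start + RECALL_CHUNKS)

tick = next_tick(b"genesis-of-the-chain")
chunks_to_try = recall_range(tick, partition_index=7)
```

Because the chain tick is sequential and each tick exposes only 400 (or 800) chunks per partition, buying a faster drive cannot buy more attempts; only storing more partitions can.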


This means the maximum number of hashes generated per second is fixed, and version 2.6 keeps that number within a range that ordinary mechanical hard disks can also handle. An SSD's ability to produce thousands of hashes per second becomes mere decoration, forced to compete with mechanical disks running at a few hundred hashes per second. It is like a Lamborghini racing a Toyota Prius on a road with a 60 km/h speed limit: the Lamborghini's advantage is severely restricted. What contributes most to mining performance is therefore the amount of the dataset a miner stores.
These are the important iterative milestones in Arweave's development. From PoA to SPoA to SPoRA to Arweave 2.6's rate-limited SPoRA, it has always followed the original vision. On December 26, 2023, Arweave officially released version 2.7 of its white paper, which adjusts much of this machinery to evolve the consensus mechanism into SPoRes, succinct proofs of replications.
