Arweave 17th Edition White Paper Interpretation

Author: Gerry Wang @ Arweave Oasis; first published on Twitter @Aarweaveoasis

P(block B_i is mined at a given checkpoint) = 1 − (D / 2^256)^(800 · n_p)

where:

B_i = the block at index i in the Arweave network;

800 · n_p = the maximum number of SPoA hashes unlocked per checkpoint, where n_p is the number of 3.6 TB partitions the miner stores and each partition unlocks at most 800 hashes;

D = the difficulty of the network.

A successful and valid proof is one whose hash value is greater than the difficulty, and the difficulty is adjusted over time to ensure that, on average, one block is mined every 120 seconds. If the time elapsed between block i and block (i+10) is T, then the adjustment from the old difficulty D_i to the new difficulty D_{i+10} is calculated as follows:

R = T / (120 × 10)

D_{i+10} = 2^256 − R × (2^256 − D_i)

Formula annotation: As the two formulas above show, the difficulty adjustment depends mainly on the parameter R, which measures how far the actual block production time deviates from the 120-seconds-per-block standard.

The newly calculated difficulty determines the probability that each generated SPoA proof wins a block, as follows:

P(win | D_{i+10}) = (2^256 − D_{i+10}) / 2^256 = R × (2^256 − D_i) / 2^256

Formula annotation: Following the derivation above, the probability of mining success under the new difficulty equals the success probability under the old difficulty multiplied by the parameter R.

Similarly, the difficulty of the VDF is recalculated periodically, with the aim of keeping checkpoints occurring once per second.
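To make the block retargeting rule concrete, here is a minimal Python sketch of the two formulas above (the function and constant names are ours, not the protocol's):

```python
MAX_HASH = 2**256          # SPoA proof hashes are 256-bit values
TARGET_BLOCK_TIME = 120    # target seconds per block, on average
RETARGET_BLOCKS = 10       # difficulty is re-evaluated every 10 blocks

def retarget_difficulty(old_difficulty: int, elapsed_seconds: float) -> int:
    """Adjust difficulty so blocks keep averaging one per 120 seconds.

    elapsed_seconds is the observed time T between block i and block i+10.
    """
    # R > 1 means blocks came too slowly, so mining must get easier.
    r = elapsed_seconds / (TARGET_BLOCK_TIME * RETARGET_BLOCKS)
    # Scale the per-hash success window (2^256 - D) by R.
    return MAX_HASH - int(r * (MAX_HASH - old_difficulty))

def success_probability(difficulty: int) -> float:
    """Probability that a single SPoA proof hash exceeds the difficulty."""
    return (MAX_HASH - difficulty) / MAX_HASH
```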

The incentive mechanism for complete replicas

Arweave's generation of each block under the SPoRes mechanism rests on the following hypothesis:

With the right incentives, miners, whether working individually or cooperating in groups, will adopt maintaining complete replicas of the dataset as their optimal mining strategy.

In the SPoRes game introduced earlier, storing two copies of half of the dataset releases the same number of SPoA hashes as storing one complete replica of the entire dataset, which leaves room for speculative behavior by miners. So when Arweave actually deployed this mechanism, it made some modifications: the protocol divides the SPoA challenges unlocked per second into two parts:

  • One part releases a certain number of SPoA challenges within the partitions the miner stores;

  • The other part randomly designates one partition out of all of Arweave to release a SPoA challenge. If the miner does not store a copy of that partition, they forfeit this portion of the challenges.

You may feel a little puzzled here: what is the relationship between SPoA and SPoRes? The consensus mechanism is SPoRes, so why are the challenges SPoA challenges? In fact, one is part of the other: SPoRes is the general name of the consensus mechanism, and it consists of a series of SPoA proof challenges that miners must answer.

To understand this, let us look at how the VDF described in the previous section is used to unlock SPoA challenges.
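Below is a minimal Python sketch of this unlocking flow (the hash choice, byte layouts, and helper names are illustrative assumptions, not Arweave's actual implementation):

```python
import hashlib

PARTITION_SIZE = 3_600_000_000_000   # 3.6 TB per partition
CHUNK_SIZE = 256 * 1024              # 256 KB data chunks
RECALL_CHUNKS = 400                  # 400 chunks = one 100 MB recall range

def unlock_spoa_challenges(check: bytes, addr: bytes,
                           partition_index: int, seed: bytes,
                           weave_size: int):
    """Derive the two recall ranges unlocked by one VDF checkpoint."""
    # Step 2: hash the checkpoint with the mining address, partition
    # index, and original VDF seed to obtain the 256-bit number H0.
    h0 = int.from_bytes(
        hashlib.sha256(
            check + addr + partition_index.to_bytes(8, "big") + seed
        ).digest(),
        "big",
    )

    # Step 3: C1, the recall offset inside the miner's own partition.
    c1 = h0 % PARTITION_SIZE

    # Step 4: the 400 chunks of 256 KB starting at C1 form the first
    # unlocked recall range of SPoA challenges.
    first_range = [c1 + i * CHUNK_SIZE for i in range(RECALL_CHUNKS)]

    # Step 5: C2, the start of the second recall range, is taken modulo
    # the sum of all partition sizes (i.e. over the whole weave).
    c2 = h0 % weave_size
    second_range = [c2 + i * CHUNK_SIZE for i in range(RECALL_CHUNKS)]

    # Step 6: a challenge in the second range only counts together with
    # a valid SPoA proof at the corresponding position of the first range.
    return first_range, second_range
```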

The code above expresses the process of using the VDF (a cryptographic clock) to unlock recall ranges, each consisting of a certain number of SPoA challenges, within the stored partitions:

  1. Roughly once per second, the VDF hash chain outputs a checkpoint (check);

  2. This checkpoint check is hashed together with the mining address (addr), the partition index (index(p)), and the original VDF seed (seed) to compute H0, a 256-bit number;

  3. C1 is the recall offset. It is the remainder of H0 divided by the partition size, and it marks the starting offset of the first recall range within the partition;

  4. The 400 chunks of 256 KB each within the 100 MB range starting at this offset form the first unlocked recall range of SPoA challenges;

  5. C2 is the starting offset of the second recall range. It is the remainder of H0 divided by the sum of the sizes of all partitions, and it likewise unlocks 400 SPoA challenges in the second recall range;

  6. The constraint on these challenges is that each challenge in the second range is valid only together with a SPoA proof at the corresponding position in the first range.

The performance of each packed partition

The performance of each packed partition refers to the number of SPoA challenges the partition generates at each VDF checkpoint. When a miner stores unique replicas of partitions, the number of SPoA challenges generated is greater than when the miner stores multiple copies of the same data.

The concept of a “unique replica” here is very different from that of a “backup copy”. For details, you can read the earlier article “Arweave 2.6 May Be More in Line with Satoshi Nakamoto’s Vision”.

If a miner stores only unique replicas of partitions, each packed partition generates all of the challenges of the first recall range, and then generates second-recall-range challenges in proportion to the number of partition replicas stored. If the entire Arweave weave consists of m partitions in total, and the miner stores unique replicas of n of them, then the performance of each packed partition is:

perf_{unique}(n, m) = (1/2) × (1 + n/m)

When the partitions stored by a miner are copies of the same data, each packed partition still produces all of the first-recall-range challenges, but only in 1/m of cases will the second recall range land in that partition. This imposes a significant performance penalty on such a storage strategy; the per-partition rate of SPoA challenges is only:

perf_{copy}(n, m) = (1/2) × (1 + 1/m)

Figure 1: The performance of a given partition improves as a miner (or a group of miners) packs more of the complete dataset.

The blue line in Figure 1 shows the performance perf_{unique}(n, m) of storing unique replicas. The figure intuitively shows that when a miner stores only a small number of unique partition replicas, the mining efficiency of each partition is only 50%. When the miner stores and maintains the entire dataset, that is, when n = m, mining efficiency reaches its maximum value of 1.
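A direct Python transcription of these two performance functions makes the curve in Figure 1 easy to sanity-check (the function names are ours):

```python
def perf_unique(n: int, m: int) -> float:
    """Per-partition performance when n of m partitions are unique replicas.

    Each packed partition yields its full first recall range (the 1/2),
    plus the second range whenever the randomly designated partition is
    one of the n the miner stores (probability n/m).
    """
    return 0.5 * (1 + n / m)

def perf_copy(n: int, m: int) -> float:
    """Per-partition performance when partitions are copies of the same data.

    The second recall range lands in this data only with probability 1/m.
    """
    return 0.5 * (1 + 1 / m)

m = 1000
print(perf_unique(1, m))   # ~0.5: a lone partition mines at half efficiency
print(perf_unique(m, m))   # 1.0: a complete replica mines at full efficiency
print(perf_copy(10, m))    # ~0.5: copies gain almost nothing from the second range
```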

Total hash rate

The total hash rate (see Figure 2) is given by the following equations, obtained by multiplying the per-partition values by n:

hashrate_{unique}(n, m) = n × perf_{unique}(n, m) = (n/2) × (1 + n/m)

hashrate_{copy}(n, m) = n × perf_{copy}(n, m) = (n/2) × (1 + 1/m)

The formulas above show that as the weave grows, if a miner does not store unique copies of the data, the penalty function grows quadratically with the number of stored partitions: the gap between the two rates is (n² − n) / (2m).

Figure 2: Total mining hash rate for unique replicas of the dataset versus copies of the same data
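Multiplying through by n, a short sketch (again with our own names) reproduces the totals behind Figure 2 and makes the quadratic penalty gap visible:

```python
def total_hash_unique(n: int, m: int) -> float:
    # n partitions, each contributing perf_unique(n, m) = (1 + n/m) / 2
    return n * 0.5 * (1 + n / m)

def total_hash_copy(n: int, m: int) -> float:
    # n packed copies of already-held data, each at (1 + 1/m) / 2
    return n * 0.5 * (1 + 1 / m)

m = 1000
for n in (100, 500, 1000):
    gap = total_hash_unique(n, m) - total_hash_copy(n, m)
    print(n, gap)   # gap = (n^2 - n) / (2m), growing quadratically in n
```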

    Marginal partition efficiency

Based on this framework, we can explore the decision problem miners face when adding a new partition: should they replicate one of their existing partitions, or obtain new data from other miners and pack it into a unique replica? When they already store unique replicas of n out of a maximum of m partitions, their mining hash rate is proportional to:

(n/2) × (1 + n/m) = n/2 + n²/(2m)

Therefore, the marginal benefit of adding a new unique partition is:

d/dn [ n/2 + n²/(2m) ] = 1/2 + n/m

The benefit of replicating an existing partition (which is smaller) is:

1/2 + 1/(2m)

Dividing the first quantity by the second, we obtain the miner's relative marginal partition efficiency (RMPE):

RMPE(n, m) = (1/2 + n/m) / (1/2 + 1/(2m))

Figure 3: Miners are incentivized to build a complete replica (Option 1) rather than make extra copies of data they already have (Option 2)

The RMPE value can be regarded as the penalty a miner incurs for replicating an existing partition when adding new data. In this expression, we can let m tend to infinity and then consider the efficiency trade-off at different values of n:

  • When a miner holds a complete replica of the dataset, the reward for new data is highest: as n approaches m and m tends to infinity, the RMPE value is 3. This means that, near a complete replica, acquiring new data is three times as efficient as repacking existing data.

  • When a miner stores half of the weave, for example when n = m/2, the RMPE is 2. This means that the profit from new data is twice the profit from replicating existing data.

  • For lower values of n, the RMPE value tends toward 1 but always remains greater than 1: the reward for storing unique replicas is always greater than the reward for copying existing data.
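These three cases can be checked numerically against the RMPE expression above; a small sketch, under the same assumptions as the reconstruction:

```python
def rmpe(n: int, m: int) -> float:
    """Relative marginal partition efficiency: gain from packing a new
    unique partition divided by the gain from copying an existing one."""
    new_partition_gain = 0.5 + n / m      # marginal gain of a unique partition
    copy_gain = 0.5 + 1 / (2 * m)         # marginal gain of a duplicate
    return new_partition_gain / copy_gain

m = 10**6
print(rmpe(m, m))        # ~3: near a complete replica, new data pays 3x
print(rmpe(m // 2, m))   # ~2: at half the weave, new data pays 2x
print(rmpe(1, m))        # ~1 (but > 1): unique data always wins
```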

As the network grows (as m tends to infinity), the incentive to build complete replicas strengthens. This promotes the creation of cooperative mining pools that jointly store at least one complete replica of the dataset.

This article has mainly introduced the details of the construction of the Arweave consensus protocol. Of course, this is just the beginning of this part of the core content. Walking through the mechanisms and code gives a very intuitive picture of the specific details of the protocol. We hope it helps everyone's understanding.
