After the GTC conference, can Web3 save AI computing power?

The resurrection of cloud computing power, in the name of AI

Fashion is a cycle, and so is Web3.

Near has "re-"branded itself as an AI public chain. Its founder's identity as a co-author of the Transformer paper earned him a seat at Nvidia's GTC conference, where he discussed the future of generative AI with Jensen Huang. Solana, thanks to the settling-in of IO.NET, Bittensor and Render Network, has successfully transformed into an AI concept chain, and newcomers involved in GPU computing such as Akash, Gaimin and Gensyn keep emerging.

If we look beyond the rising token prices, we can find a few interesting facts:

  1. The competition for GPU computing power has spread to decentralized platforms: the more computing power, the stronger the results;

  2. The computing paradigm is transitioning from the cloud toward decentralization, driven by the shift in AI demand from training to inference; putting models on chain is no longer empty talk;

  3. The underlying software and hardware composition and operating logic of the Internet architecture have not fundamentally changed; the decentralized computing power layer mainly adds an incentive layer on top to form the network;

  4. A clarification of concepts: the "cloud computing power" of the Web3 world was born in the era of cloud mining. It referred to vendors packaging the hash power of their mining machines so users could avoid the huge expense of buying hardware. But these vendors often "oversold", for example mixing the hash power of 100 mining machines and selling it to 105 buyers to reap excess returns, and the term eventually became a synonym for deception.

The cloud computing power in this article refers specifically to the GPU-based computing resources of cloud vendors. The question is whether decentralized computing power platforms are merely a front end for cloud vendors, or the next version upgrade.

The ties between traditional cloud vendors and blockchain run deeper than we imagine. Public chain nodes, development work and day-to-day storage basically revolve around AWS, Alibaba Cloud and Huawei Cloud, avoiding the expensive investment of purchasing physical hardware. But the problems this brings cannot be ignored: in extreme cases, pulling a network cable can take a public chain offline, which seriously violates the spirit of decentralization.

On the other hand, decentralized computing power platforms either directly build a "machine room" to maintain network stability, or directly build an incentive network, such as IO.NET's strategy of using airdrops to inflate GPU numbers, much like Filecoin's storage grew on the back of the FIL token. The starting point is not to satisfy real usage demand but to empower the token. One piece of evidence: large companies, individuals and academic institutions rarely use these platforms for actual ML training, inference or graphics rendering, which causes serious waste of resources.

It is just that in the face of high token prices and FOMO sentiment, all the accusations that decentralized computing power is merely a cloud-hash-power scam have disappeared.


Two types of cloud computing power: do they only share a name?

Inference and FLOPS: quantifying GPU computing power

The computing power requirements of AI models are evolving from training to inference.

Take OpenAI's Sora as an example. Although it is also built on Transformer technology, the academic community speculates that its parameter count is on the order of tens of billions, compared with GPT-4's trillion level, so its training cost is lower. This is easy to understand: fewer parameters mean proportionally fewer computing resources.
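To make that intuition concrete, here is a minimal back-of-the-envelope sketch using the commonly cited rule of thumb that dense-Transformer training compute is roughly 6 × parameters × training tokens. The parameter and token counts below are purely illustrative assumptions, not disclosed figures for Sora or GPT-4.

```python
# Back-of-the-envelope training-compute comparison (rule of thumb: FLOPs ≈ 6 * N * D).
# All numbers below are illustrative assumptions, not official figures.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense Transformer."""
    return 6 * params * tokens

gpt4_like = training_flops(params=1e12, tokens=1e13)   # hypothetical trillion-parameter model
sora_like = training_flops(params=3e10, tokens=1e12)   # hypothetical tens-of-billions model

print(f"trillion-parameter model : {gpt4_like:.2e} FLOPs")
print(f"tens-of-billions model   : {sora_like:.2e} FLOPs")
print(f"ratio: {gpt4_like / sora_like:.0f}x")
```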

Conversely, Sora may need stronger "inference" ability, which can be understood as the ability to generate a specific video from an instruction. Video has long been regarded as creative content, so it is harder to simply summarize patterns from existing material, and mindlessly piling up compute will not work miracles here.

In the previous stage, AI computing power was mainly used for training, with only a small part going to inference, and basically all of it was absorbed by Nvidia's various products. As inference demand grows, combined with lighter-weight models and reduced precision, inference is slowly becoming the mainstream demand.

A note on GPU classification is also needed. One often sees the joke that "gaming saved AI", and there is truth to it: strong demand for high-performance GPUs in the gaming market covered the R&D costs. A 4090 graphics card, for example, can both play games and be used for AI alchemy. But note that gaming cards and compute cards will gradually decouple. The process resembles the evolution of Bitcoin mining hardware from personal computers to dedicated mining machines, moving through CPU, GPU, FPGA and ASIC in turn.


LLM-dedicated card R&D is underway …

With the maturation of AI technology, especially along the LLM route, there will be more and more attempts like TPUs, DPUs and LPUs. Of course, the mainstream product is still Nvidia's GPU; for now these are supplements to the GPU, and completely replacing it will take time.

The decentralized computing power projects discussed here do not compete over GPU supply channels; they try to build a new profit model instead.

At this point Nvidia is practically the protagonist, holding roughly 80% of the graphics card market. The choice between N cards and A cards exists only in theory; in reality, everyone quietly votes for Nvidia.

This near-absolute monopoly has created the spectacle of everyone scrambling for GPUs, from the consumer-grade RTX 4090 to the enterprise-grade A100/H100, with the major cloud vendors as the main force of stockpiling. Yet AI-related companies such as Google, Meta, Tesla and OpenAI all have their own chip efforts or plans, and Chinese companies have turned to domestic manufacturers such as Huawei. The GPU track remains crowded.

For traditional cloud vendors, what is actually being sold is computing power and storage space, so their in-house chip efforts focus on being cheap and usable. The probability that Web3 will produce dedicated AI chips in the future, however, is unlikely to be as high as it was with Bitcoin mining.

Moreover, since Ethereum moved to PoS, specialized hardware in the crypto world has become increasingly rare; the market sizes of the Saga phone, ZK hardware acceleration and DePIN are all too small. Hopefully decentralized computing power can explore a Web3-flavored path toward dedicated AI computing cards.

Is decentralized computing power the next step for the cloud, or merely a supplement?

The industry usually measures GPU computing power in FLOPS (Floating Point Operations Per Second), the most commonly used indicator of computing speed; whatever form the hardware takes, the difference is ultimately just how high or low the FLOPS is.
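As a rough illustration of how such figures come about, the sketch below estimates theoretical peak FP32 throughput as cores × clock × 2 (one fused multiply-add per core per cycle). The spec numbers are approximate public figures and should be treated as illustrative only.

```python
# Theoretical peak FP32 throughput: cores * clock * 2 (one FMA = 2 floating-point ops).
# Spec figures are approximate public numbers, used only for illustration.

def peak_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    return cuda_cores * boost_clock_ghz * 2 / 1_000  # TFLOPS

gpus = {
    "RTX 4090 (consumer)": (16384, 2.52),
    "A100 (data center)":  (6912, 1.41),
}

for name, (cores, clock) in gpus.items():
    print(f"{name}: ~{peak_tflops(cores, clock):.1f} TFLOPS FP32")
```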

Moving from local computing to the cloud took roughly half a century, and the distributed concept has existed since the dawn of the computer. Driven by LLMs, the combination of decentralization and computing power no longer looks as illusory as before. To survey the existing decentralized computing power projects as comprehensively as possible, there are only two dimensions of inspection:

1. The amount of hardware such as GPUs, i.e. the computing speed on offer. In the spirit of Moore's Law, newer GPUs are stronger, and more of them means more total computing power;

2. How the incentive layer is organized. This is Web3's industry characteristic: dual tokens, governance functions, airdrop incentives and so on. It makes the long-term value of each project easier to grasp, rather than fixating on short-term token prices.

From this perspective, decentralized computing power still follows the DePIN route of "existing hardware + incentive network": the Internet architecture remains the bottom layer, the decentralized computing power layer is an incentive network stacked on top, its selling point is permissionless access, and actually forming the network still requires the cooperation of hardware.

Computing power should be decentralized, but GPUs must be concentrated

Borrowing the blockchain trilemma, the security of decentralized computing power needs no special consideration; what matters are decentralization and scalability, the latter being the whole point of networking the GPUs, which at present means going all in on AI.

Start from a paradox: if a decentralized computing power project is to succeed, the number of GPUs on its network must be as large as possible. The reason is simple: with too few GPUs there is no usable training or inference effect.

Of course, compared with the absolute control exercised by cloud vendors, decentralized computing power projects at the current stage can at least offer permissionless access and free migration of GPU resources, though whether a mature product emerges from this remains uncertain.

In terms of scalability, GPUs can be used for more than AI: cloud PCs and rendering are also feasible directions. For example, Render Network focuses on rendering, while Bittensor and others focus on model training; looked at more finely, each has different usage scenarios and purposes.

Therefore, beyond GPU count and incentive network, two more parameters can be added: decentralization and scalability, forming a four-angle comparison framework. Note that this approach differs from a purely technical comparison.

Among these projects, Render Network is actually quite special: it is essentially a distributed rendering network, and its relationship with AI is not direct. In AI training and inference, every step is tightly coupled: algorithms such as Stochastic Gradient Descent and backpropagation require consistency across steps. Rendering and similar tasks do not; videos and images are often simply cut into pieces to make task distribution easier.
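A minimal sketch of why the two workloads distribute so differently: rendering frames is an independent map over tasks, while data-parallel SGD needs every worker's gradient combined at each step before anyone can continue. All names and numbers here are purely illustrative.

```python
import numpy as np

# Rendering: each frame is independent, so tasks can be scattered to any node
# and collected in any order.
def render_frame(frame_id: int) -> str:
    return f"frame_{frame_id}.png"                   # stand-in for an expensive, isolated job

rendered = [render_frame(i) for i in range(8)]       # trivially parallel

# Data-parallel training: each worker computes a gradient on its shard,
# but all gradients must be averaged before the next step can start.
def local_gradient(weights: np.ndarray, shard: np.ndarray) -> np.ndarray:
    return 2 * shard.mean(axis=0) * weights          # toy gradient of a quadratic loss

weights = np.ones(4)
shards = [np.random.rand(16, 4) for _ in range(4)]   # one shard per worker
grads = [local_gradient(weights, s) for s in shards]
weights -= 0.1 * np.mean(grads, axis=0)              # synchronization barrier every step
```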

Its AI training capability is mainly delegated to IO.NET, effectively acting as an IO.NET plug-in; since the GPUs are networked anyway, why not? Its earlier migration to Solana has, in hindsight, shown that Solana is better suited to the high-performance demands of rendering networks.

Next is IO.NET's brute-force route of piling up GPU numbers. The official website currently lists 180,000 GPUs, placing it in the first tier of decentralized computing power projects. In terms of scalability, IO.NET focuses on AI inference, with AI training as incidental work.

Strictly speaking, AI training is not well suited to distributed deployment: even lightweight LLMs have parameter counts that are far from small, and centralized computing is more cost-effective economically. The binding point between Web3 and AI in training lies more in data privacy and encrypted computation, with technologies such as ZK and FHE; where Web3 is promising is AI inference. On one hand, inference has lower demands on raw GPU performance and can tolerate a certain degree of precision loss; on the other hand, AI inference sits closer to the application side, and incentives framed from the user's perspective are more attractive.
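As an illustration of the precision-loss tolerance mentioned above, the toy sketch below quantizes a weight matrix to int8 and shows that the output of a linear layer barely changes. It is a simplified per-tensor scheme of my own, not any particular project's inference pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)   # toy weight matrix
x = rng.normal(size=256).astype(np.float32)          # toy activation vector

# Symmetric per-tensor int8 quantization: store weights as int8 plus one scale factor.
scale = np.abs(W).max() / 127.0
W_int8 = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_deq = W_int8.astype(np.float32) * scale             # dequantize before the matmul

y_fp32 = W @ x
y_int8 = W_deq @ x

rel_err = np.linalg.norm(y_fp32 - y_int8) / np.linalg.norm(y_fp32)
print(f"relative output error after int8 quantization: {rel_err:.4%}")  # typically well under 1%
```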

Filecoin, another project that mined its way to tokens, has also reached a GPU-utilization agreement with IO.NET: Filecoin will connect 1,000 of its own GPUs to IO.NET's network.

Then there is Gensyn, which has not yet launched, so we can only give it an armchair evaluation. Because it is still in the early stages of network construction, its GPU count has not been announced, but its main use case is AI training. My personal feeling is that its requirements for high-performance GPU numbers will not be small, at least exceeding the level of Render Network. Compared with AI inference, AI training competes head-on with cloud vendors, so its mechanism design will be more complicated.

Specifically, Gensyn needs to guarantee the validity of model training while keeping training efficient, which leads it to rely heavily on an off-chain computing paradigm coordinated on chain, involving four roles:

• Submitters: the task initiators, who ultimately pay for the training.

• Solvers: train the model and provide proofs of validity.

• Verifiers: verify the validity of the solvers' work.

• Whistleblowers: check the verifiers' work.

Overall, the mechanism resembles PoW mining plus an optimistic proof scheme, and the architecture is quite complicated. Moving computation off chain may save costs, but the complexity of the architecture brings additional operating costs. Compared with the projects currently focused on AI inference, Gensyn's path is harder, and I wish it good luck.
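To make the role interaction concrete, here is a toy simulation of the "optimistic proof" flavor described above: a solver posts a result with a stake, a verifier re-executes only a sampled slice, and a whistleblower challenges when fraud slips through. All function names and numbers are hypothetical and only gesture at the mechanism, not at Gensyn's actual protocol.

```python
import random

random.seed(42)

def train_step(seed: int) -> int:
    """Stand-in for a deterministic training task (think: hash of a checkpoint)."""
    return hash(("checkpoint", seed)) & 0xFFFFFFFF

def solver(task_seeds, honest=True):
    """Solver trains and posts per-step commitments; a dishonest one corrupts a step."""
    results = [train_step(s) for s in task_seeds]
    if not honest:
        results[len(results) // 2] ^= 1           # silently corrupt one step
    return results

def verifier(task_seeds, claimed, sample_rate=0.3):
    """Verifier optimistically accepts, re-executing only a random sample of steps."""
    sampled = random.sample(range(len(task_seeds)), k=max(1, int(len(task_seeds) * sample_rate)))
    return all(train_step(task_seeds[i]) == claimed[i] for i in sampled)

def whistleblower(task_seeds, claimed):
    """Whistleblower fully re-checks; a successful challenge slashes the faulty party."""
    return all(train_step(s) == c for s, c in zip(task_seeds, claimed))

seeds = list(range(10))                    # the submitter's paid-for training task
claimed = solver(seeds, honest=False)      # a cheating solver
if verifier(seeds, claimed):
    # The verifier's sample missed the fraud; the whistleblower catches it and claims the bond.
    print("verifier accepted; whistleblower challenge valid:", not whistleblower(seeds, claimed))
else:
    print("verifier rejected the result; solver's stake is slashed")
```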

Finally, the veteran project Akash, which started at roughly the same time as Render Network. Akash focused on decentralizing CPUs while Render Network focused on GPUs; both have now moved into AI, with Akash leaning more toward inference.

The key to Akash's second spring is that it spotted the problem of mining rigs left idle after Ethereum's upgrade. Idle GPUs can now not only be listed on second-hand marketplaces as "lightly used by a female college student", but also be put to work on AI; either way, it counts as a contribution to human civilization.

Akash does have one advantage: its tokens are basically fully in circulation. As befits an old project, it also actively adopts the staking system common in PoS chains, but ultimately it still comes down to how the team executes.

Beyond these, there are Theta for edge cloud computing, Phoenix providing AI computing power solutions, and old and new compute darlings such as Bittensor and Ritual; for lack of space, and lacking parameters such as GPU counts, they are not compared one by one here.

      Conclusion

Throughout the history of computing, every computing paradigm has had a decentralized version built for it; the only regret is that none of them have had any impact on mainstream applications. The current Web3 computing projects are still mostly entertaining themselves within the industry: even Near's founder attended GTC as an author of the Transformer paper, not as Near's founder.

What is even more pessimistic is that the current cloud computing market is too large and the incumbents too powerful. Can IO.NET replace AWS? If its GPU count grows large enough, it really is possible; after all, AWS also built on open-source Redis as an underlying component.

In a sense, the power of open source and the power of decentralization are aligned, yet decentralized projects are overly concentrated in financial fields such as DeFi. AI may be a key path for cutting into the mainstream market.
