
Compiled by: Fairy, ChainCatcher
Editor’s note: Looking past the halo of the technology, the author sees the many obstacles, capital and hardware among them, that Web3 projects face in advancing AI. Although Web3’s original intent is to break centralization and realize the ideal of decentralization, in practice it is often swayed by market narratives and token incentives, drifting away from that intent.
ChainCatcher compiles the original text as follows:
Calls to merge AI and Web3 are growing louder, but this is not another optimistic VC piece. We are optimistic about merging the two technologies, but the text below is a call to action. Otherwise, that optimism will not be realized.
Why? Because developing and running the best AI models requires enormous capital expenditure; state-of-the-art hardware is often hard to obtain, and the R&D involved is highly specialized. Crowdsourcing these resources through crypto incentives, as most Web3 AI projects are doing, is not enough to offset the tens of billions of dollars invested by the large companies that control AI development. Given the hardware constraints, this may be the first large software paradigm that smart and creative engineers outside existing organizations have no way to break into.
Software is “eating the world” at an ever-increasing pace and will soon grow exponentially as AI accelerates. As things stand, all of this “pie” is flowing to the tech giants, while end users, including governments and large enterprises, grow ever more beholden to their power.
Misaligned incentives
All of this is happening at the worst possible time: 90% of decentralized-network participants are busy chasing the “golden egg” of easy, narrative-driven fiat gains.
In our industry, developers follow investors, not the other way around. This takes many forms, from overt acknowledgment to subtler subconscious motivations, but narratives, and the markets that form around them, drive many decisions in Web3. As in a classic reflexive bubble, participants are too focused on the inner world to notice the outside one, except where it helps advance the narrative of the current cycle. And AI is obviously the biggest narrative, since it is itself in a period of booming development.
We have spoken with dozens of teams at the intersection of AI and crypto and can confirm that many of them are highly capable, mission-driven, and passionate builders. But human nature is what it is: faced with temptation, we tend to give in and then rationalize the choice afterwards.
The path of easy liquidity has been a historical curse of the crypto industry; by now it has delayed years’ worth of development and valuable adoption. It has turned even the most loyal crypto believers toward “pumping the token.” The rationalization is that builders holding tokens may have better opportunities.
The low sophistication of institutional and retail capital gives builders the opportunity to make claims detached from reality while still benefiting from valuations as if those claims had already been fulfilled. The result of these processes is deeply entrenched moral hazard and capital destruction, and few such strategies work over the long run. Necessity is the mother of all invention: when the necessity disappears, so does the invention.
The timing could not be worse. While all the smartest tech entrepreneurs, state actors, and enterprises large and small race to secure their share of the AI revolution, crypto founders and investors opt for the quick “10x.” And that, in our view, is the real opportunity cost.
Overview of the Web3 AI landscape
Given the incentives described above, Web3 AI projects can broadly be classified as:
- Reasonable (further divisible into realists and idealists)
- Semi-reasonable
- Fake
Fundamentally, we believe builders should know clearly how to keep up with their Web2 competitors, and which arenas are genuinely contestable versus delusional, even if the delusional ones can still be sold to venture firms and the public.
Our goal is to be competitive here and now. Otherwise, the pace of AI development may leave Web3 behind as the world leaps to a “Web4” split between Western corporate AI and Chinese state AI. Those who cannot become competitive in time and count on distributed technology catching up over a longer horizon are too optimistic to be taken seriously.
Obviously, this is only a very rough taxonomy; even the “fake” group contains at least a few serious teams (and perhaps more who are merely deluded). But this article is a call to action, so we make no attempt at objectivity; rather, we are urging readers to feel a sense of urgency.
Reasonable:
The not-so-numerous founders building “AI on-chain” middleware understand that decentralized training or inference of the models users actually need (i.e., the cutting edge) is infeasible at present, if not impossible.
So finding a way to connect the best centralized models to an on-chain environment, and to profit from sophisticated automation, is a good-enough first step for them. Hardware-isolated TEEs (“air-gapped” processors) that can host API access points, bidirectional oracles (for indexing on-chain and off-chain data in both directions), and coprocessor architectures that provide agents with a verifiable off-chain compute environment currently look like the best available solutions.
There are also coprocessor architectures that use zero-knowledge proofs (ZKPs) to snapshot state changes (rather than verify full computations), which we believe will become viable in the medium term.
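To make the pattern concrete, here is a minimal sketch of the coprocessor flow described above: an off-chain worker, imagined as running inside a TEE, executes a model call, commits to the input/output pair, and returns a signed attestation that a contract could verify cheaply on-chain. Every name here is hypothetical, and the shared HMAC key stands in for real hardware remote attestation; this illustrates the flow, not any existing protocol’s API.

```python
import hashlib
import hmac
import json

# Stand-in for a key bound to the TEE's hardware identity; a real design
# would rely on remote attestation, not a shared secret. (Assumption.)
ATTESTATION_KEY = b"tee-sealed-demo-key"

def run_inference(prompt: str) -> str:
    """Placeholder for a call to a centralized model API from inside the TEE."""
    return f"model-output-for:{prompt}"

def attest(prompt: str) -> dict:
    """Run the model off-chain, then commit to (input, output) and sign it."""
    output = run_inference(prompt)
    commitment = hashlib.sha256(json.dumps(
        {"input": prompt, "output": output}, sort_keys=True
    ).encode()).hexdigest()
    signature = hmac.new(ATTESTATION_KEY, commitment.encode(),
                         hashlib.sha256).hexdigest()
    return {"output": output, "commitment": commitment, "signature": signature}

def verify_on_chain(result: dict) -> bool:
    """What the contract side would check: the signature over the commitment,
    not the (expensive) model computation itself."""
    expected = hmac.new(ATTESTATION_KEY, result["commitment"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, result["signature"])

result = attest("rebalance the vault if ETH gas < 10 gwei")
print("attestation verified:", verify_on_chain(result))
```

The point of the pattern is that the chain never re-runs the model; it only checks a cheap cryptographic artifact produced in a trusted enclave.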
A more idealistic approach to the same problem tries to verify off-chain inference so that it matches on-chain computation in its trust assumptions.
We believe the goal should be for AI to perform on-chain and off-chain tasks within one unified operating environment. However, most proponents of inference verifiability talk of thorny goals such as “trusting model weights” that will only become relevant years from now, if ever. Recently, founders in this camp have begun exploring alternative ways to verify inference, having initially focused on ZKPs. While many smart teams are working on ZKML (zero-knowledge machine learning), they are betting that cryptographic optimization will outpace the growth in complexity and compute requirements of AI models, a risk we consider too large. We therefore think they are not yet fit to compete. Still, some recent progress is interesting and should not be ignored.
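For a rough sense of what “verifying inference” means in its simplest non-ZK form, here is a hedged sketch of optimistic re-execution: a prover commits to (model weights, input, output), and any challenger holding the same weights re-runs the deterministic computation and compares. Everything below is illustrative (the “inference” is a hash standing in for a deterministic forward pass); a ZKML scheme would replace the full re-execution with verification of a succinct proof.

```python
import hashlib

def deterministic_inference(weights: bytes, prompt: str) -> str:
    """Stand-in for a fully deterministic forward pass (fixed seeds/kernels)."""
    return hashlib.sha256(weights + prompt.encode()).hexdigest()[:16]

def commit(weights: bytes, prompt: str, output: str) -> str:
    """Prover publishes a commitment binding weights, input, and claimed output."""
    return hashlib.sha256(weights + prompt.encode() + output.encode()).hexdigest()

def challenge(weights: bytes, prompt: str, claimed: str, commitment: str) -> bool:
    """Challenger re-executes and checks both output and commitment.
    ZKML would replace this O(model-size) re-execution with proof verification."""
    return (deterministic_inference(weights, prompt) == claimed
            and commit(weights, prompt, claimed) == commitment)

weights = b"frozen-model-weights"
prompt = "classify this transaction"
output = deterministic_inference(weights, prompt)
c = commit(weights, prompt, output)
print("honest prover accepted:", challenge(weights, prompt, output, c))
print("forged output rejected:", not challenge(weights, prompt, "forged", c))
```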
Semi-reasonable:
Consumer applications that wrap closed- and open-source models (e.g., Stable Diffusion or Midjourney for image generation). Some of these teams were first to market and have earned recognition from real users. So it would be unfair to call them all fake, yet only a few are thinking deeply about how to evolve their underlying models in a decentralized way and to innovate on incentive design. On the token side there are some interesting governance/ownership designs as well. Most of these projects, however, simply put a token on top of an otherwise centralized wrapper around, say, the OpenAI API, in order to capture a valuation premium or bring faster liquidity to the team.
What neither camp has solved is the training and inference of large models in a decentralized setting. Today, you cannot train a foundation model in a reasonable time without relying on a tightly connected hardware cluster. And given the level of competition, “reasonable time” is the key factor.
There have been some promising recent research results; in theory, approaches such as “Differential Data Flow” may in the future be extended to distributed compute networks to increase their capacity (as network capabilities catch up with data-flow requirements). But competitive model training still requires communication between localized clusters rather than single distributed devices, and cutting-edge compute (retail GPUs are increasingly uncompetitive).
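A back-of-the-envelope estimate, using link speeds and a model size we choose purely for illustration, shows why the interconnect is the binding constraint: synchronous data-parallel training has to exchange roughly the full gradient every step, and consumer links are orders of magnitude slower than data-center interconnects.

```python
# Illustrative gradient-synchronization cost in data-parallel training.
# Model size and link speeds are assumptions chosen for the example.
params = 70e9                        # assume a 70B-parameter model
grad_bytes = params * 2              # fp16/bf16 gradients: ~140 GB per step

links_bytes_per_s = {
    "data-center interconnect (~400 Gbit/s)": 400e9 / 8,
    "good home broadband (~1 Gbit/s)":        1e9 / 8,
}

for name, rate in links_bytes_per_s.items():
    print(f"{name}: ~{grad_bytes / rate:,.0f} s per gradient exchange")
```

On these assumptions, a single exchange takes seconds inside a cluster but closer to twenty minutes per step over home broadband, before any compute happens at all, which is why training stays inside tightly coupled clusters.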
Research on localized inference by shrinking model size (one of the two routes to decentralization) has also made recent progress, but there is no existing Web3 protocol that takes advantage of it.
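To see why shrinking models matters for local inference, here is a simple, illustrative memory estimate; the parameter counts and precisions are assumptions, not figures for any specific released model.

```python
# Illustrative weight-memory footprint at different precisions.
GIB = 2**30

def weight_memory_gib(params: float, bits: int) -> float:
    """Memory for the weights alone: params * bits / 8 bytes."""
    return params * bits / 8 / GIB

for params, label in [(7e9, "7B"), (70e9, "70B")]:
    for bits in (16, 8, 4):
        print(f"{label} model @ {bits}-bit: {weight_memory_gib(params, bits):6.1f} GiB")
```

On these numbers, a 7B model quantized to 4 bits fits comfortably in consumer-GPU memory, while a 70B model at 16 bits does not come close, which is the gap this line of research is trying to narrow.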
The problem of decentralized training and inference logically brings us to the last of the three camps, by far the most important one, and therefore the most emotionally triggering for us.
Fake:
Infrastructure applications concentrated mainly in the field of decentralized servers, offering bare-metal hardware or decentralized environments for model training/hosting. There are also software-infrastructure projects pushing protocols such as federated learning (decentralized model training), or those combining software and hardware components into a platform on which people can train and deploy decentralized models essentially end to end. Most of them lack the sophistication needed to actually solve the problems described, and the naive thinking of “token incentive + market tailwind” prevails here. None of the solutions we have seen in public or private markets amounts to meaningful competition here and now. Some may grow into viable (but niche) products, but what we need now is fresh, competitive ones, and that can only be achieved through innovative designs that resolve the distributed-compute bottleneck. In training, not only is speed a big problem, but so are the verifiability of completed work and the coordination of training workloads, which add to the bandwidth bottleneck.
We need competitive, truly decentralized foundation models, and those require decentralized training and inference in order to work. Losing AI could completely negate everything the “decentralized world computer” has achieved since Ethereum appeared. If computers become AI, and AI is centralized, there will be no world computer to speak of, short of some dystopian version of one.
Training and inference are at the heart of AI innovation. While the rest of the AI world moves toward more closed architectures, Web3 needs orthogonal solutions to compete with it, because the feasibility of head-on competition is shrinking fast.
The scale of the problem
It all comes down to compute. The more of it you pour into training and inference, the better the results. Yes, there are tweaks and optimizations here and there, and compute itself is not homogeneous; there are all sorts of new approaches to overcoming the bottlenecks of traditional von Neumann processing units. But it all boils down to how many matrix multiplications you can do, over how large a block of memory, and how fast.
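To put rough numbers on “how many matrix multiplications,” a commonly used rule of thumb counts about 2mkn floating-point operations for multiplying an m×k matrix by a k×n matrix, and roughly 6 FLOPs per parameter per training token for a transformer. The model size, token count, and accelerator throughput below are assumptions chosen for illustration.

```python
# Back-of-the-envelope compute accounting; concrete numbers are assumptions.

def matmul_flops(m: int, k: int, n: int) -> float:
    """~2*m*k*n FLOPs: one multiply plus one add per output-element term."""
    return 2.0 * m * k * n

# One large layer-sized multiplication.
print(f"8192 x 8192 x 8192 matmul: {matmul_flops(8192, 8192, 8192):.2e} FLOPs")

# Common transformer training rule of thumb: ~6 FLOPs per parameter per token.
params, tokens = 70e9, 2e12              # assumed model and dataset sizes
train_flops = 6 * params * tokens
sustained_flops_per_s = 1e15             # assume ~1 PFLOP/s per accelerator
days = train_flops / sustained_flops_per_s / 86400
print(f"training compute: {train_flops:.2e} FLOPs ≈ {days:,.0f} accelerator-days")
```

Even with these generous per-device assumptions, a single accelerator would need on the order of 10,000 days, which is exactly why the race is about who can wire up the most hardware, fastest.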
That is why we see the so-called “hyperscalers” building data centers so aggressively: they all want to own the full stack, with AI models at the top and the hardware powering them underneath: OpenAI (models) + Microsoft (compute), Anthropic (models) + AWS (compute), Google (both), and Meta (increasingly both, by doubling down on building its own data centers). There are more nuances, interaction dynamics, and stakeholders, but we will not list them all. Overall, the hyperscalers are investing billions of dollars in data-center construction like never before, creating synergies between their compute and AI offerings that are expected to yield huge profits as AI spreads through the global economy.
Just look at these four companies’ expected levels of build-out in this year alone:
Jensen Huang, CEO of NVIDIA, has suggested that a total of $1 trillion will be invested in AI acceleration over the next few years. He recently doubled that forecast to $2 trillion, reportedly because of the interest he is seeing from sovereign players.
Analysts at Altimeter expect global AI-related data-center spending to reach $160 billion in 2024 and more than $200 billion in 2025.
Now compare these numbers with the incentives Web3 offers independent data-center operators to expand their capital expenditure on the latest AI hardware:
The total market capitalization of all decentralized physical infrastructure (DePIN) projects currently stands at around $40 billion, composed mainly of relatively illiquid, largely speculative tokens. Essentially, the market cap of these networks is an upper-bound estimate of the total capital expenditure of their contributors, since that build-out is incentivized with the tokens. However, the current market cap is of little use here, since most of it has already been issued.
So let’s assume that, as an incentive, another $80 billion (2x the existing value) of private and public DePIN token capital comes to market over the next 3-5 years, and assume those tokens are used 100% for AI use cases. Even if we divide this very rough estimate by 3 (years) and compare its dollar value with the cash the hyperscalers invested in 2024 alone, it is clear that slapping token incentives on a pile of “decentralized GPU network” projects is not enough.
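Spelled out with the article’s own figures (the assumed $80 billion spread over three years, against Altimeter’s roughly $160 billion hyperscaler estimate for 2024 cited above):

```python
# The rough comparison above, made explicit. Inputs are the article's figures.
depin_token_incentives = 80e9    # assumed new DePIN token capital, 3-5 years
years = 3
hyperscaler_capex_2024 = 160e9   # Altimeter's 2024 estimate cited above

annual_incentives = depin_token_incentives / years
print(f"DePIN incentives per year: ${annual_incentives / 1e9:,.1f}B")
print(f"hyperscaler capex in 2024: ${hyperscaler_capex_2024 / 1e9:,.1f}B")
print(f"gap: ~{hyperscaler_capex_2024 / annual_incentives:.0f}x, before 2025 growth")
```

About $27 billion a year of token incentives against $160 billion and rising of cash capex: a roughly 6x gap, even under the generous 100%-AI assumption.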
On top of that, billions of dollars of investor demand are needed to absorb these tokens, since the operators of these networks sell large quantities of the tokens they mine to cover significant capital and operating costs. Still more money is needed to push these tokens up and to incentivize a build-out big enough to overtake the hyperscalers.
However, anyone with a deep understanding of how Web3 servers operate today might point out that a large share of “decentralized physical infrastructure” actually runs on the cloud services of those very hyperscalers. Of course, surging demand for GPUs and other AI-specific hardware is driving more supply, which should eventually make cloud rental or purchase cheaper. At least, that is the expectation.
But consider this at the same time: Nvidia now has to prioritize among customers for its latest-generation GPUs. It has also begun competing with the largest cloud providers on their own turf, offering AI platform services to enterprise customers already locked into those hyperscalers. This will eventually push it either to build its own data centers over time (which would erode the fat margins it enjoys now, so this is unlikely) or to significantly limit its AI hardware sales to within its partner network of cloud providers.
Additionally, Nvidia’s competitors, which are bringing extra AI-specific hardware to market, mostly use the same chips from TSMC as Nvidia does. So essentially all AI hardware companies today are competing for TSMC’s capacity, and TSMC, too, has to prioritize certain customers. Samsung and potentially Intel (which is trying to return to state-of-the-art chip fabrication as soon as possible to produce chips for its own hardware) may be able to absorb additional demand, but TSMC currently produces most AI-related chips, and scaling and calibrating cutting-edge chip fabrication (3 nm and 2 nm) takes years.
Finally, China is essentially cut off from the latest generation of AI hardware by the US restrictions on NVIDIA and TSMC. Unlike Web3, Chinese companies do have their own competitive models, notably the LLMs of companies such as Baidu and Alibaba, which need large quantities of previous-generation devices to run.
For any one of the reasons above, or through their combination, there is a non-negligible risk that, as the battle for AI intensifies and takes priority over the cloud business, the hyperscalers will restrict outside access to their AI hardware. Basically, it is a scenario in which they take all AI-related cloud capacity for themselves, no longer offering it to anyone else, while also swallowing up all the latest hardware. Either way, the remaining compute supply would face even higher demand from the other large players, including sovereign states. Meanwhile, the leftover consumer-grade GPUs grow less and less competitive.
Obviously, this is only an extreme scenario, but as long as the hardware bottleneck persists, the big players will not back off, because the prize is too large. That would leave decentralized operators such as secondary data centers and owners of retail-grade hardware (who make up the majority of Web3 DePIN providers) shut out of the competition.
The other side of the coin
While crypto founders sleep, the AI giants are watching crypto closely. Government pressure and competition may push them to adopt crypto in order to avoid being shut down or heavily regulated.
The founder of Stability AI recently resigning in order to start “decentralizing” his company is one of the earliest public hints of this. He had earlier made no secret of his plan to launch a token after the company’s successful listing, which somewhat exposes the real motives behind the intended move.
Similarly, while Sam Altman plays no operational role in Worldcoin, the crypto project he co-founded, its token undoubtedly trades like a proxy for OpenAI. Whether there is a path to connecting the internet-token project with the AI R&D project, only time will tell, but the Worldcoin team, too, seems to recognize that the market is testing this hypothesis.
It would make perfect sense to us for the AI giants to explore various paths to decentralization. The problem we see here, once again, is that Web3 has not produced meaningful solutions. “Governance tokens” are for the most part a meme, and right now only tokens that explicitly avoid a direct link between asset holders and the development and operation of their networks, such as BTC and ETH, are truly decentralized.
The same incentives that slow technological development also push the design of crypto networks toward governance in name only. Startup teams simply paste a “governance token” onto their product, hoping to figure things out along the way, only to end up entrenched in complacency within the “governance theatre” that surrounds resource allocation.
Conclusion
The AI race is on, and everyone in it is deadly serious. We can find no flaw in the thinking of the big tech giants as they scale out their compute: more compute means better AI, and better AI means lower costs, new revenue, and greater market share. To us, that means the bubble is justified, but all the pretenders will still be washed out in the inevitable shake-outs ahead.
Centralized, big-enterprise AI dominates the field, and startups struggle to keep up. The Web3 space, though late, is joining the race. Compared with startups in the Web2 space, the market has rewarded crypto AI projects far too generously, shifting founders’ attention from shipping products to pumping token prices at a critical moment, and the window is closing fast. So far, no innovation here has managed to sidestep the need for scaled-up compute in order to compete.
A credible open-source movement has now emerged around consumer-facing models. At first, only some centralized enterprises (e.g., Meta, Stability AI) chose to contest market share against their larger closed-source rivals; now the community is catching up, putting pressure on the leading AI companies. These pressures will continue to affect the closed-source development of AI products, but not materially until open source catches up. This is another big opportunity for the Web3 space, but only if it solves decentralized model training and inference.
So while on the surface the “classic” disruptor opportunity exists, the reality is far from it. AI is inextricably tied to compute, and without breakthrough innovation in the next 3-5 years that cannot change; and this is the critical period for determining who controls and directs AI development.
As for the compute market itself, even though demand spurs supply-side effort, “letting a hundred flowers bloom” is impossible, because competition among manufacturers is constrained by structural factors such as chip fabrication and economies of scale.
We remain optimistic about human ingenuity and are convinced that there are enough smart and noble people out there to try to crack the AI problem in a way that benefits the free world rather than serving top-down corporate or government control. But that chance looks very slim, at best a coin toss, and yet Web3 founders are busy flipping coins for financial gain rather than for real impact on the world.