
Author: Gryphsis Academy | Source: Medium | Translation: Shan Oppa, Bitchain Vision
Summary:
- At the end of 2022, commercial applications of generative AI swept the world, but as the novelty fades, problems with the technology are emerging. The increasingly mature Web3 field can use the transparency, verifiability, and decentralization of blockchain to offer a new perspective on these problems.
- Generative AI is an emerging technology built on deep-learning neural networks. Its applications in image-generation models and in language models such as ChatGPT have shown great commercial potential.
- In Web3, the architecture for generative AI comprises infrastructure, models, applications, and data. The data component, especially where it integrates with Web3, is critical and has great growth potential. Blockchain-based data models, AI-agent projects, and applications in professional verticals may become key areas of future development.
- The Web3 AI protocols currently on the market have weak fundamentals and limited ability to capture token value. We look forward to new token-economy designs in the future.
- Generative AI has great potential in the Web3 space, and its integration with other software and hardware technologies promises exciting future developments.
1. Why do generative AI and Web3 need each other?
2022 was a watershed year for generative artificial intelligence. Before then, generative AI was mainly an auxiliary tool for professionals. This changed dramatically with the advent of DALL-E 2, Stable Diffusion, Imagen, and Midjourney, which pushed Artificial Intelligence Generated Content (AIGC) to the forefront of technology trends and set off a content boom on social media. ChatGPT, released soon after, changed the game and pushed the trend to its peak.
As the first AI tool able to answer almost any question from a simple text prompt, ChatGPT quickly became a daily work assistant for many people. It can handle tasks such as document writing, homework tutoring, email drafting, paper editing, and even emotional counseling, and it triggered heated online discussion of optimizing outputs through "magic prompts," letting people truly feel the "intelligence" of artificial intelligence.
According to a report from Goldman Sachs' macro team, generative AI could boost U.S. labor productivity growth, potentially raising global GDP by 7% (nearly $7 trillion) over a decade and lifting productivity growth by 1.5 percentage points.
The Web3 field also felt the positive impact of AIGC. In January 2023, Web3's AI sector rose across the board.
However, as the initial excitement began to fade, ChatGPT's global traffic declined in June 2023 for the first time since launch (data from SimilarWeb). This downturn makes it a timely moment to rethink the importance and limitations of generative AI.
The current challenges facing generative AI include, but are not limited to: unauthorized and untracked AIGC flooding social media platforms; ChatGPT's high maintenance costs pushing OpenAI to reduce output quality as a cost-cutting measure; and persistent biases in large global models such as ChatGPT caused by imbalanced data distribution.
As the initial enthusiasm for generative AI like ChatGPT fades, the maturing Web3 domain, with its decentralization, transparency, and verifiability, offers new solutions to the challenges facing generative AI:
1. Web3 transparency and traceability can solve copyright and privacy issues related to AI-generated content
Web3's transparency and traceability can effectively verify content source and authenticity, significantly raising the cost of producing fraudulent or infringing AI-generated content, such as copyright-infringing TikTok mashup videos or privacy-invading DeepFake videos. Smart contracts in content management can resolve copyright issues and ensure that creators are fairly compensated.
2. Web3’s decentralization reduces the risk of centralized AI computing
Developing generative AI requires enormous computing resources. For example, training the GPT-3 model behind ChatGPT cost more than $2 million, and running it incurs a daily electricity bill of about $47,000; these costs are expected to grow exponentially as the technology and its scale advance.
At present, large amounts of computing resources are concentrated in the hands of large companies, resulting in high development, maintenance, and operating costs, centralization risks, and difficulty for small companies to compete. Although training large models may still need to be centralized because of their heavy compute requirements, Web3's blockchain technology can enable distributed model inference, governance by community voting, and model tokenization.
Taking decentralized exchanges as an analogy, we can imagine a community-driven, decentralized AI inference system in which the community owns and governs large models.
3. Using Web3 to achieve diverse AI datasets and interpretable AI models
Traditional data collection methods are often constrained by geography and culture, producing subjective biases in AI-generated content and ChatGPT responses, such as skewing the skin color of generated subjects. Web3's token-incentive model can optimize data collection, gathering and weighting data from around the world. In addition, Web3's transparency and traceability enhance model interpretability and encourage diversified outputs that enrich the model.
4. Unique AI models built on massive Web3 on-chain data
AI model design and training are usually built around the target data format (text, speech, image, or video). A unique future direction for the convergence of AI and Web3 is to develop large-scale models for on-chain data, similar to natural language models.
This approach can yield insights unavailable to traditional data analytics (such as smart-money tracking and analysis of project fund flows), with AI able to process large amounts of data simultaneously.
5. Generative AI as a catalyst lowering Web3's barriers to entry
Currently, mainstream participation in Web3 projects requires an in-depth understanding of complex on-chain concepts and wallet operations, which raises learning costs and error risks. By contrast, Web2 applications are designed around the "lazy principle," allowing users to get started easily and safely.
Generative AI can significantly enhance the user experience by acting as a "smart assistant" between users and protocols in Web3, supporting intent-centric projects.
2. Summary of generative AI technology
2.1 Technical background of generative artificial intelligence
Since the concept of artificial intelligence was proposed in the 1950s, it has experienced several peaks and troughs, and every key technological innovation has triggered a new wave.
Generative AI, an emerging concept of the past decade, stands out among AI research directions for its impressive technology and product performance, attracting global attention almost overnight. Before delving into its technical architecture, we should define what this article means by generative AI and briefly review the core technologies behind the recent wave of generative AI products.
Generative AI refers to AI used to create new content and ideas, including conversations, stories, images, videos, and music. It is built on deep-learning neural networks and trained on massive datasets, with models containing very large numbers of parameters.
Generative AI products that have recently entered the public eye fall roughly into two categories: image (and video) generation products driven by text or style input, and ChatGPT-like products driven by text input. Both rely on the same core technology: pre-trained large language models (LLMs) based on the Transformer architecture. The former combines text input with diffusion models to generate high-quality images or videos, while the latter uses reinforcement learning from human feedback (RLHF) to generate outputs that closely resemble human logic.
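To make the first pipeline concrete, below is a toy Python sketch of the iterative denoising loop at the heart of diffusion models. Everything here is a simplified stand-in: `predict_noise` plays the role of a trained, text-conditioned network, and the update rule is deliberately schematic rather than a faithful sampler.

```python
import numpy as np

def predict_noise(x, t, prompt):
    # Stand-in for a trained, text-conditioned denoising network.
    return np.zeros_like(x)

def sample(prompt, steps=50, shape=(64, 64)):
    x = np.random.randn(*shape)           # start from pure Gaussian noise
    for t in range(steps, 0, -1):         # iteratively remove predicted noise
        eps = predict_noise(x, t, prompt)
        x = x - eps / steps               # schematic denoising step
    return x                              # the "image" after refinement

img = sample("an astronaut riding a horse")
```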
2.2 Current technical architecture of generative AI
Many excellent articles discuss the impact of generative AI on existing technology architectures from different perspectives. For example, the a16z article "Who Owns the Generative AI Platform?" comprehensively summarizes the current technical architecture of generative AI.
According to this research, the generative AI architecture in the current Web2 era is divided into three levels: infrastructure (computing power), models, and applications. The article also provides insights into the current developments at these three levels.
Infrastructure: At present, the focus remains mainly on Web2 infrastructure logic, and few projects truly integrate Web3 and AI. Infrastructure has captured the greatest value at this stage. The Web2 giants, deeply involved in storage and compute for decades, have made huge profits by "selling shovels" during the exploration stage of artificial intelligence.
Models: Ideally, the model should be the true creator and owner of AI value. However, few business models currently allow the authors of these models to capture the corresponding commercial value.
Applications: Applications developed in multiple verticals have generated hundreds of millions of dollars in revenue. However, high maintenance costs and low user retention make it challenging to sustain them as viable long-term businesses.
2.3 Application of Generative Artificial Intelligence in Web3
2.3.1 Using AI to analyze massive Web3 data
Data is the cornerstone of the technical moats that will shape AI's future development. To understand its importance, we first look at research on where large AI models' performance comes from.
This research demonstrates the emergent abilities of large AI models: when model size exceeds a certain threshold, model accuracy surges suddenly. As shown in the figure, each chart represents a training task and each line the performance (accuracy) of a large model. Experiments on various large models consistently reach the same conclusion: beyond a certain threshold, model performance grows explosively across different tasks.
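A synthetic illustration of this threshold effect (not real benchmark data): accuracy stays near chance until model scale crosses an arbitrarily chosen threshold, then jumps sharply.

```python
import math

def accuracy(params):
    # Hypothetical logistic jump around 10^10 parameters, for illustration only.
    return 0.1 + 0.85 / (1 + math.exp(-6 * (math.log10(params) - 10)))

for p in [1e8, 1e9, 1e10, 1e11, 1e12]:
    print(f"{p:.0e} params -> accuracy {accuracy(p):.2f}")
# 1e+08 and 1e+09 stay near 0.10; 1e+11 and 1e+12 approach 0.95
```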
In essence, a quantitative change in model size produces a qualitative change in model performance, and this size is related to the number of model parameters, training duration, and training-data quality. Currently, with no significant difference in model parameters (architectures designed by each company's top research teams) or training duration (most computing hardware is purchased from NVIDIA), there are two main ways to build a leading product.
The first is to identify and address specific pain points in a niche area, which requires deep understanding of, and insight into, the target domain. The second, more practical approach is to collect more comprehensive data than competitors.
This opens an excellent entry point for large generative AI models into the Web3 realm. Existing large or foundation models are trained on massive data from various fields, and the uniqueness of Web3 on-chain data makes such data an exciting and feasible direction.
At the data level in Web3, there are currently two product logics. The first incentivizes data providers, encouraging users to share data-usage rights while protecting data privacy and ownership; Ocean Protocol provides an effective data-sharing model of this kind. The second integrates data and applications to provide users with task-specific services; for example, Trusta Labs collects and analyzes users' on-chain data and offers services such as Sybil account detection and on-chain asset risk analysis through its proprietary MEDIA scoring system.
2.3.2 Application of AI Agent in Web3
As mentioned earlier, applications of on-chain AI agents are booming. Built on large language models and prioritizing user privacy, they provide quantifiable on-chain services. According to a blog post by Lilian Weng, an AI researcher at OpenAI, an AI agent can be decomposed as: Agent = LLM (large language model) + planning + memory + tool use.
As the core of the AI agent, the LLM handles external interactions, learns from large amounts of data, and expresses itself logically in natural language. The planning and memory components resemble the actions, policies, and rewards of the reinforcement learning used to train AlphaGo: tasks are broken down into smaller goals, optimal solutions are learned through repeated training and feedback, and information is stored in different types of memory according to its function. Tool use refers to the agent's ability to employ modular tools, such as Internet information retrieval and access to proprietary information sources or APIs. Notably, most of this information is difficult to modify after pre-training.
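The decomposition above can be sketched in a few lines of Python. Every component below is a hypothetical stub introduced for illustration; a real agent would call an actual LLM and real tools (search engines, price feeds, APIs).

```python
def stub_llm(prompt: str) -> str:
    return "lookup: ETH price"      # stand-in for a large language model

def price_feed(query: str) -> str:
    return "ETH = $2,000 (stub)"    # stand-in for an on-chain data tool

class Agent:
    def __init__(self, llm, tools):
        self.llm = llm              # LLM: the reasoning core
        self.tools = tools          # tool use: name -> callable
        self.memory = []            # memory: record of past steps

    def run(self, goal: str) -> str:
        steps = [self.llm(f"plan for: {goal}")]         # planning (one stub step)
        for step in steps:
            tool_name, _, query = step.partition(": ")
            observation = self.tools[tool_name](query)  # tool use
            self.memory.append((step, observation))     # write to memory
        return f"done, memory = {self.memory}"

agent = Agent(stub_llm, {"lookup": price_feed})
print(agent.run("check my portfolio risk"))
```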
Given this logic of AI agents, we can imagine endless possibilities for combining Web3 with AI agents. For example:
- In trading applications, an integrated AI-agent model can give customers a natural-language interface offering a range of functions, including price prediction, trading strategies, stop-loss strategies, dynamic leverage adjustment, intelligent copy-trading of opinion leaders, and borrowing.
- When executing a quantitative strategy, the strategy can be decomposed into subtasks assigned to different AI agents. Collaborating agents can strengthen privacy protection and enable real-time monitoring against exploitation by adversaries.
- Many NPCs in blockchain-based games align naturally with AI agents. Some projects already use GPT to dynamically generate game-character dialogue; future developments may move beyond preset text toward more realistic, real-time NPC (or even digital-human) interactions that operate independently of player intervention. Stanford's "virtual town" experiment is a good example of such an application.
- Although current Web3 + AI-agent projects are concentrated in the primary market or in AI infrastructure, and no killer consumer application has yet appeared, the potential for a game-changing Web3 + AI project is huge. Blockchain features such as distributed on-chain governance, zero-knowledge-proof inference, model distribution, and improved interpretability give these projects broad prospects.
2.3.3 Potential vertical applications of Web3+AI
A. Applications in the field of education
The convergence of Web3 and AI heralds a revolution in education, where generative virtual-reality classrooms are a notable innovation. With AI embedded in online learning platforms, students can receive a personalized learning experience: the system generates customized educational content based on each student's learning history and interests. This approach is expected to improve students' motivation and efficiency and make education more personalized.
In addition, token-based credit incentives are an innovative practice in education. Using blockchain technology, students' credits and grades can be encoded as tokens to form a digital credit system. This mechanism encourages active participation in learning activities and creates a more engaging, motivating learning environment.
Inspired by FriendTech, the recently popular SocialFi project, similar key-pricing logic could be applied to peer-review systems among students, adding more social elements to education. Taking advantage of blockchain's immutability, peer reviews become fairer and more transparent. Such a mechanism not only cultivates students' teamwork skills but also provides a more comprehensive, multi-dimensional assessment of student performance, introducing diversified and holistic evaluation methods into the education system.
B. Applications in the medical field
In the medical field, the integration of Web3 and AI promotes federated learning and distributed inference. By combining distributed computing with machine learning, medical professionals can share data at scale, enabling deeper and more comprehensive collective learning. This collective-intelligence approach can accelerate the development of disease diagnoses and treatment plans and promote advances in medicine.
Privacy protection is also an important aspect of medical applications. With Web3's decentralization and blockchain's immutability, patient medical data can be stored and transferred more securely. Smart contracts can enable precise control and permission management of medical data, ensuring that only authorized personnel can access sensitive patient information and preserving the privacy of medical data.
C. Applications in the field of insurance
In the insurance industry, the integration of Web3 and AI is expected to bring more efficient and intelligent solutions to traditional operations. In automotive and home insurance, for example, computer vision helps insurers assess property value and risk levels more effectively through image analysis. This gives insurance companies more refined, personalized pricing strategies and strengthens risk management across the industry.
Meanwhile, on-chain automated claims processing is an innovative advance for the industry. With smart contracts and blockchain technology, the claims process becomes more transparent and efficient, reducing tedious procedures and human intervention. This not only speeds up claims but also lowers operating costs, providing a better experience for insurers and customers alike.
Dynamic premium adjustment is another area of innovation. Through real-time data analysis and machine-learning algorithms, insurers can adjust premiums more accurately and promptly, personalizing prices based on each policyholder's actual risk profile. This approach not only makes premiums fairer but also encourages policyholders to adopt healthier, safer behaviors, promoting risk management and prevention across society.
D. Applications in the field of copyright
In the copyright field, the combination of Web3 and AI introduces new paradigms for digital content creation, management, and development. Through smart contracts and decentralized storage, copyright information for digital content can be better protected, making it easier for creators to track and manage their intellectual property. Blockchain technology can also establish transparent, immutable records of creation, providing more reliable means of tracking and verifying works.
Innovation in working models is another major change in the copyright field. Token-incentivized collaboration ties contributions to token rewards, encouraging creators, curators, and developers to participate in a project. This not only promotes collaboration within creative teams but also lets participants benefit directly from a project's success, leading to more excellent works.
Meanwhile, using tokens as proof of copyright reshapes the distribution of benefits. Through dividend mechanisms executed automatically by smart contracts, all participants in a work receive their share of income in real time whenever the work is used, sold, or transferred. This decentralized distribution model effectively addresses the opacity and delays of the traditional copyright model and gives creators a fairer, more efficient way to share in the proceeds.
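A minimal sketch of the pro-rata split such a dividend mechanism would automate. The shares and sale amount are hypothetical examples, and a real implementation would live in a smart contract rather than off-chain Python.

```python
# Hypothetical revenue shares agreed among a work's participants.
shares = {"creator": 0.60, "curator": 0.25, "developer": 0.15}
sale_proceeds = 1_000   # tokens received when the work is used, sold, or transferred

payouts = {party: sale_proceeds * share for party, share in shares.items()}
print(payouts)  # {'creator': 600.0, 'curator': 250.0, 'developer': 150.0}
```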
E. Applications in the metaverse
In the metaverse, the convergence of Web3 and AI opens new possibilities for creating low-cost AIGC to enrich blockchain-based game content. AI-generated virtual environments and characters can enrich games, giving users a more vivid and diverse experience while reducing the labor and time costs of production.
Creating vivid digital humans is another innovation in metaverse applications. Digital humans have physical appearances detailed down to individual strands of hair and psychological logic built on large language models, allowing them to play various roles in the metaverse. They can interact with users and even serve as digital twins in real-world scenarios. This provides a more realistic and immersive virtual-reality experience, promoting the wide application of digital-human technology in entertainment, education, and other fields.
Automatically generating advertising content from on-chain user profiles is an intelligent-advertising application in the metaverse. By analyzing users' behaviors and preferences in the metaverse, AI algorithms can create more personalized and attractive ads, improving click-through rates and user engagement. This approach both matches user interests and gives advertisers more efficient promotion channels.
Generative, interactive NFTs are a compelling technology in the metaverse. By combining NFTs with generative design, users can participate in creating their own NFT artworks, giving the works interactivity and uniqueness. This opens new possibilities for creating and trading digital assets and promotes the development of digital art and the virtual economy in the metaverse.
3. Representative Web3 protocols
In this section, the author selects five representative protocols to examine the current state of generative AI in the Web3 field: Render Network and Akash Network as leaders among general AI-infrastructure protocols in Web3's AI category; Bittensor as a popular project in model training; Alethea.ai for its close relevance to generative-AI applications; and Fetch.ai as a demonstration of the potential of AI agents in the decentralized Web3 world.
3.1 Render Network ($RNDR)
Render Network was founded in 2017 by Jules Urbach, founder of its parent company OTOY. OTOY's core business is cloud-based graphics rendering; it is advised by co-founders of Google and Mozilla, has contributed to Oscar-winning film projects, and has worked with Apple.
Render Network is OTOY's move into Web3, aiming to use the distributed nature of blockchain to connect small-scale rendering and AI demand with decentralized resources. The goal is to save costs for small studios that would otherwise rent expensive centralized computing resources (such as AWS, Microsoft Azure, and Alibaba Cloud) and to provide revenue-generating opportunities for those with idle computing resources.
Backed by OTOY and its proprietary renderer Octane Render, Render Network was soon regarded as a Web3 project with a solid foundation and real potential, launching with inherent demand and a sound business model.
With the rise of generative AI, demand for distributed verification and inference tasks has grown steadily, fitting Render's technical architecture perfectly and making it a promising direction for future development. Render has led the AI track in the Web3 space and become something of a bellwether: it benefits from an upward trend whenever the narrative around AI, the metaverse, or distributed computing heats up, demonstrating its versatility.
In February 2023, Render Network announced a roadmap updating its tiered pricing system and introducing a community-voted price-stabilization mechanism for $RNDR (a release date has not yet been announced). The project also announced its migration from Polygon to Solana (upgrading $RNDR to the Solana SPL-based $RENDER token, completed in November 2023).
Render Network's new pricing system divides on-chain services into three tiers, from high to low, each corresponding to a different price point and quality of rendering service, giving customers choices based on their specific rendering needs.
The community-voted $RNDR price-stabilization mechanism replaces irregular buybacks with a "Burn-and-Mint Equilibrium" (BME) model. This change emphasizes $RNDR as a stable medium of payment rather than an asset to hold long-term. The business process of one BME epoch is as follows (a minimal numerical sketch follows the list):
- Product creation: product creators on Render, i.e., providers of rendering resources, package idle rendering capacity into products (nodes) and bring them online, ready for use.
- Product purchase: customers with rendering needs burn $RNDR tokens directly as service fees. If they hold no $RNDR, they first buy tokens on a DEX with fiat and then burn them. The price paid for the service is publicly recorded on the blockchain.
- Token minting: new tokens are minted according to preset rules.
Note: Render Network takes 5% of what product buyers pay as a project operating fee.
In each BME epoch, a preset number of new tokens is minted (the quantity decreases over time). These new tokens are distributed to three parties:
- Product creators, rewarded on two bases: (a) task completion, based on the number of rendering tasks each product node completes; (b) online availability, encouraging resource providers to stay online, with rewards based on standby time.
- Product buyers: much like a shopping-mall coupon rebate, buyers can receive back up to 100% of the $RNDR they burned, encouraging future use of Render Network.
- DEX liquidity providers: partners who ensure $RNDR is available at reasonable prices for the required burning, rewarded based on the amount of $RNDR staked.
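A minimal numerical sketch of one BME epoch. Only the 5% operating fee and the three-way distribution of newly minted tokens come from the description above; all amounts and distribution weights are hypothetical.

```python
burned = 10_000                 # $RNDR burned by buyers as service fees this epoch
operating_fee = burned * 0.05   # 5% of buyer payments kept by Render Network

epoch_emission = 8_000          # preset mint for this epoch (decreases over time)
distribution = {                # hypothetical weights across the three parties
    "product_creators": 0.60,   # split between tasks completed and uptime
    "buyer_rebates": 0.30,      # coupon-style rebate, capped at 100% of burn
    "dex_liquidity": 0.10,
}
minted = {party: epoch_emission * w for party, w in distribution.items()}

net_supply_change = epoch_emission - burned   # negative -> deflationary epoch
print(minted, net_supply_change)
```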
Judging from $RNDR's price trend over the past year, as the leading AI-track project in Web3, $RNDR benefited from the AI boom driven by ChatGPT in late 2022 and early 2023. With the introduction of the new token mechanism, the price of $RNDR peaked in the first half of 2023.
After a period of stability, the price of $RNDR reached a recent high amid the AI revival triggered by OpenAI's new releases, the migration of Render Network to Solana, and the expected implementation of the new token mechanism. Given that $RNDR's fundamentals remain thin, future investment in $RNDR calls for careful position management and risk control.
Data from the Dune Analytics dashboard show that the total number of rendering tasks has increased since the beginning of 2023, while the number of rendering nodes has not. This suggests that the new users driving the workload increase are those with rendering needs rather than those supplying rendering resources.
Given the surge in generative AI at the end of 2022, it is reasonable to infer that the additional rendering tasks relate to generative-AI applications. Whether this increase in demand is a long-term trend or a temporary surge remains to be seen and needs further observation.
3.2 Akash Network ($AKT)
Akash Network is a decentralized cloud computing platform designed to provide developers and enterprises with more flexible, efficient and cost-effective cloud computing solutions.
The project's "supercloud" platform is built on distributed blockchain technology, using blockchain's decentralized characteristics to give users global, decentralized cloud infrastructure comprising diverse computing resources such as CPUs, GPUs, and storage.
Founded by Greg Osuri and Adam Bozanich, entrepreneurs with rich project backgrounds, Akash Network has a clear mission: reduce cloud-computing costs, increase availability, and give users greater control over their computing resources. By incentivizing providers to open up idle computing resources through a bidding process, Akash Network achieves more efficient resource utilization and offers competitive prices to resource demanders.
In January 2023, Akash Network launched the Akash Network Economics 2.0 update to address various flaws in the current token economy, including:
- $AKT market-price volatility creates mismatches between long-term contract prices and value.
- Incentives are insufficient for resource providers to release large amounts of computing power.
- Inadequate community incentives hinder the project's long-term development.
- Inadequate value capture by $AKT poses a risk to project stability.
According to the official website, the solutions proposed in the Akash Network Economics 2.0 plan include introducing stablecoin payments, adding order fees to increase protocol revenue, enhancing incentives for resource providers, and increasing community incentives. Notably, stablecoin payments and order fees have already been implemented.
As the native token of the Akash Network, $AKT has multiple uses in the protocol, including staking for validation (security), incentives, network governance, and payment of transaction fees. According to the official website, the total supply cap of $AKT is 388 million; as of November 2023, about 229 million (59%) had been unlocked. The genesis tokens allocated at the project's start were fully unlocked and had entered the secondary market by March 2023. The allocation of genesis tokens is as follows:
Regarding value capture, one notable but not-yet-implemented feature mentioned in the white paper is that Akash plans to charge a "take fee" on every successful lease. These fees would be sent to a revenue pool for distribution to holders.
The plan stipulates a 10% fee on transactions settled in $AKT and a 20% fee on transactions using other cryptocurrencies. In addition, Akash intends to reward holders who lock up $AKT for the long term, incentivizing long-term investment.
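A worked example of the proposed take fee under the rates stated above (10% for leases settled in $AKT, 20% for other currencies); the lease amounts are hypothetical.

```python
def take_fee(lease_amount: float, paid_in_akt: bool) -> float:
    # Fee routed to the revenue pool for distribution to holders.
    rate = 0.10 if paid_in_akt else 0.20
    return lease_amount * rate

print(take_fee(100.0, paid_in_akt=True))   # 10.0 on an $AKT-settled lease
print(take_fee(100.0, paid_in_akt=False))  # 20.0 on other currencies
```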
Price data from CoinGecko show that $AKT experienced uptrends in mid-August and late November 2023, although the gains lagged other projects in the AI space, possibly owing to prevailing market sentiment.
Overall, Akash Network is one of the few quality projects on the AI track, with fundamentals superior to most competitors. Its potential business revenue could bring future profitability to the protocol, and with the growth of the AI industry and rising demand for cloud computing, Akash Network is well placed to make significant progress in the next wave of artificial intelligence.
3.3 Bittensor ($TAO)
For those familiar with Bitcoin's technical architecture, understanding Bittensor's design is straightforward: its creators deliberately drew on several features of the cryptocurrency pioneer, including a total token supply of 21 million, issuance that halves roughly every four years, and a Proof-of-Work (PoW)-style consensus mechanism.
To conceptualize it, imagine Bitcoin's original issuance process, then replace the computationally intensive mining, which creates no real-world value, with the training and validation of AI models. Miners are rewarded based on the performance and reliability of their AI models. That is a simple summary of the Bittensor ($TAO) architecture.
Bittensor was founded in 2019 by AI researchers Jacob Steeves and Ala Shaabana, based on a white paper by the pseudonymous Yuma Rao. In short, it is an open-source, permissionless protocol that creates a network of many subnets, each responsible for a different task (machine translation, image recognition and generation, large language models, and so on). Good task performance is rewarded, and subnets can interact with and learn from one another.
Today's major AI models are the product of tech giants investing massive computing resources and data. While these AI products perform well, the approach carries high centralization risks.
Bittensor's infrastructure allows networks of expert models to communicate and learn from one another, laying the foundation for decentralized training of large models. Bittensor's long-term vision is to compete with the closed-source models of giants such as OpenAI, Meta, and Google, preserving decentralization while aspiring to match their inference performance.
The technical core of Bittensor is Yuma Rao's uniquely designed consensus mechanism, known as Yuma Consensus, which blends elements of PoW and Proof of Stake (PoS). The supply side consists of "servers" (miners) and "validators," while the demand side consists of "clients" who use the models in the network. The process is as follows (a minimal sketch in code follows the list):
- The client sends a request and data to a validator for processing.
- The validator distributes the data to miners on a specific subnet.
- Miners run inference with their models on the received data and return the results.
- Validators rank the inference results by quality and record the rankings on the blockchain.
- The best inference results are returned to the client, and miners and validators receive rewards based on their rankings and workloads.
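A minimal sketch of one such round. The miners, the quality score, and the reward curve are all hypothetical stubs; the real mechanism uses stake-weighted rankings recorded on chain.

```python
def yuma_round(request, miners, score):
    # Validator fans the request out to miners on a subnet.
    results = {name: model(request) for name, model in miners.items()}
    # Validator ranks results by quality (stub score); rankings go on chain.
    ranking = sorted(results, key=lambda name: score(results[name]), reverse=True)
    best = results[ranking[0]]                  # best inference returns to the client
    rewards = {name: 1.0 / (i + 1) for i, name in enumerate(ranking)}  # toy curve
    return best, rewards

miners = {"miner_a": lambda q: q.upper(), "miner_b": lambda q: q[::-1]}
best, rewards = yuma_round("translate this", miners, score=len)
print(best, rewards)
```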
Notably, Bittensor itself does not train models on most subnets; it acts more as a connective layer between model providers and users, improving performance on various tasks through interaction among smaller models. Currently, around 30 subnets are online, each corresponding to a different task model.
As Bittensor's native token, $TAO plays a crucial role in creating subnets, registering on subnets, paying service fees, and staking for validators within the ecosystem. Following the spirit of Bitcoin, $TAO had a fair launch, meaning all tokens are generated through contributions to the network.
Currently, about 7,200 $TAO are produced per day, distributed evenly between miners and validators. Since the project's start, approximately 26.3% of the 21 million tokens have been issued, of which 87.21% are staked for validation. The project also follows Bitcoin's model of halving issuance roughly every four years, with the next halving scheduled for September 20, 2025, which is expected to be a significant price driver.
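A back-of-the-envelope sketch of this emission schedule using the figures above (about 7,200 $TAO per day, halving roughly every four years, a 21 million cap); the exact halving rule is simplified.

```python
daily, total, years = 7_200.0, 0.0, 0
while total < 21_000_000 * 0.99 and years < 60:
    total += daily * 365        # one year of emission
    years += 1
    if years % 4 == 0:          # approximate four-year halving
        daily /= 2
print(f"~{total / 1e6:.1f}M $TAO emitted after {years} years")
# Issuance is split 50/50 between miners and validators along the way.
```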
Starting in late October 2023, $TAO's price rose sharply, driven mainly by a new wave of AI enthusiasm after OpenAI's developer conference and the market's renewed shift toward the AI field.
As a new project on the Web3 + AI track, $TAO's quality and long-term vision have attracted investment. But it must be admitted that, like other AI projects, the Web3 + AI combination has great potential yet has not found an actual business able to support long-term profitability.
3.4 Alethea.ai ($ALI)
Founded in 2020, Alethea.ai is a project dedicated to using blockchain technology to bring decentralized ownership and governance to generative content.
The founders of Alethea.ai believe generative AI will usher in an era of information redundancy, in which large amounts of digital content can be copied or generated with a simple copy-paste or click, while original creators struggle to capture the benefits. By connecting blockchain primitives such as NFTs with generative AI, they aim to ensure ownership of generative AI and its content and to build community governance on top of it.
Driven by this concept, Alethea.ai first launched the new NFT standard iNFT, which uses Intelligence Pods to embed AI animation, speech synthesis, and even generative AI into images. Alethea.ai also worked with artists to create iNFTs from their artwork, one of which sold for $478,000 at a Sotheby's auction.
Alethea.ai then introduced its AI Protocol, which allows any generative-AI developer or creator to build with the iNFT standard without permission. To demonstrate the AI Protocol, Alethea.ai developed CharacterGPT, a tool built on GPT-like large models for creating interactive NFTs. Recently, they released Open Fusion, which allows any ERC-721 NFT to be combined with Intelligence and released on the AI Protocol.
The native token of Alethea.ai is $ALI, which has four main uses:
- Lock a certain amount of $ALI to create an iNFT.
- The more $ALI locked, the higher the Intelligence Pod's level.
- $ALI holders participate in community governance.
- $ALI serves as the credential for interactions between iNFTs (no actual use cases yet).
Judging from these use cases, $ALI's token value capture still rests primarily on narrative. The past year's price trend confirms this: $ALI benefited from the generative-AI boom that ChatGPT set off in December 2022, and Alethea.ai's announcement of its new Open Fusion feature in June triggered another surge. Apart from these episodes, however, $ALI's price has trended downward, and unlike similar projects it did not even react to the late-2023 AI hype.
Beyond the native token, Alethea.ai's NFT offerings (including its official collection) also merit attention in the NFT market.
According to the Dune dashboard, both third-party Intelligence Pods and Alethea.ai's first-party Revenants collection faded from attention after their initial releases. The author believes the main reason is that the initial novelty gradually wore off, with no substantive value or community engagement left to retain users.
3.5 Fetch.ai ($FET)
Fetch.ai is a project dedicated to the integration of artificial intelligence (AI) and blockchain. Its goal is to build a decentralized smart economy by combining machine learning, blockchain, and distributed-ledger technologies to support economic activity among intelligent agents.
Founded in 2019 by British scientists Humayun Sheikh, Toby Simpson, and Thomas Hain, Fetch.ai has an impressive founding team: Humayun Sheikh was an early investor in DeepMind, Toby Simpson has held executive positions at several companies, and Thomas Hain is a professor of artificial intelligence at the University of Sheffield. The founders' diverse experience spans traditional IT companies, star blockchain projects, healthcare, and supercomputing, giving Fetch.ai rich industry resources.
Fetch.ai's mission is to build a decentralized network platform composed of Autonomous Economic Agents (AEAs) and AI applications, allowing developers to accomplish preset, goal-oriented tasks by creating autonomous agents. The platform's core technology is its unique three-layer architecture:
- Bottom layer: the PoS-uD consensus mechanism, a proof-of-stake variant. This base layer supports the smart-contract network, miner collaboration, and basic machine-learning training and inference.
- Middle layer: the Open Economic Framework (OEF), which provides a shared space for interaction between AEAs and the underlying protocols, supporting search, discovery, and transactions among AEAs.
- Top layer: AEAs, the core components of Fetch.ai. Each AEA is an intelligent agent program that performs functions through skill modules and executes tasks predefined by users. These agents do not run directly on the blockchain; they interact with the blockchain and smart contracts through the OEF. Agent software can be purely software-based or bound to physical hardware such as smartphones, computers, and cars. Fetch.ai provides the Python-based AEA framework, a modular development kit that enables developers to build their own agents (a hypothetical sketch of this agent-plus-skills pattern follows the list).
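Below is a hypothetical sketch of the agent-plus-skill-modules pattern described above. The class and method names are illustrative only and do not correspond to the actual API of Fetch.ai's AEA framework.

```python
class Skill:
    """A modular capability an agent can register and invoke."""
    def handle(self, message: str) -> str:
        raise NotImplementedError

class PriceCheckSkill(Skill):
    def handle(self, message: str) -> str:
        # A real skill might query market data discovered via the OEF.
        return f"price for {message}: 42 (stub)"

class AutonomousAgent:
    def __init__(self, name: str):
        self.name = name
        self.skills = {}                 # skills registered by task name

    def add_skill(self, task: str, skill: Skill):
        self.skills[task] = skill

    def act(self, task: str, message: str) -> str:
        # A real AEA would interact with the ledger/OEF here, not run on chain.
        return self.skills[task].handle(message)

agent = AutonomousAgent("fetch-demo")
agent.add_skill("price", PriceCheckSkill())
print(agent.act("price", "FET/USD"))
```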
On top of this architecture, Fetch.ai has launched products and services such as Co-Learn (machine-learning models shared between agents) and Metaverse (an agent cloud-hosting service) to support users in developing agents on its platform.
As for the token, $FET, Fetch.ai's native token, covers standard functions such as paying gas, staking for validation, and purchasing services within the network. More than 90% of $FET tokens have been unlocked; the specific allocation is as follows:
Since its inception, Fetch.ai has gone through multiple dilutive financing rounds, most recently a $30 million investment from DWF Labs on March 29, 2023. Given that $FET cannot derive value from project revenue, its price momentum depends mainly on project updates and market sentiment toward AI. Indeed, amid the two waves of the AI boom, Fetch.ai's price surged more than 100% in early 2023 and again at year-end.
Fetch.ai's development trajectory resembles that of a Web2 AI startup focused on improving its technology, seeking recognition and profitability through ongoing fundraising and extensive collaboration.
This approach leaves ample room for future applications built on Fetch.ai, but it also means the project may be less attractive to other blockchain projects, potentially limiting the vitality of its ecosystem. One of Fetch.ai's founders even launched a DEX, Mettalex, built on Fetch.ai, but it ended in failure. For an infrastructure-focused project, a withering ecosystem hinders the growth of intrinsic value.
4. A bright future for generative artificial intelligence
NVIDIA CEO Jensen Huang compared the launch of large generative models to the "iPhone moment" of artificial intelligence, marking a key shift in AI's role, with high-performance computing chips becoming the core scarce resource of the AI era.
AI infrastructure projects, which lock up most of the funds in Web3's AI sub-track, remain the focus of investors' long-term attention. As chip giants continue to upgrade computing capabilities, AI's capabilities will expand, likely spawning more AI infrastructure projects in Web3, perhaps even chips designed specifically for Web3 AI training.
While consumer-centric generative AI products are still experimental, some industrial-grade applications have shown great potential. One of them is the "digital twin," which transfers real-world scenarios into the digital realm.
Given the untapped value in industrial data, NVIDIA's Omniverse digital-twin platform positions generative AI as an important part of industrial digital twins. In Web3, spanning virtual worlds, digital content creation, and real-world assets, AI-powered digital twins will play an important role.
The development of new interactive hardware is also crucial. Historically, every hardware innovation in computing has brought revolutionary changes and opportunities, such as the now-ubiquitous computer mouse or the iPhone 4's multi-touch capacitive screen.
The Apple Vision Pro, announced for release in the first quarter of 2024, has attracted global attention with impressive demonstrations and is expected to bring unexpected changes and opportunities to various industries. The entertainment industry, known for rapid content production and wide dissemination, usually benefits first from hardware updates; this includes Web3's metaverse, blockchain games, NFTs, and more, all worthy of long-term attention and research.
In the long run, the development of generative AI represents quantitative change leading to qualitative change. At the heart of ChatGPT lies a solution to the long-studied academic problem of reasoning-based question answering; only through scaled-up data and model iteration did it reach GPT-4's impressive level. AI applications in Web3 are similar: we are currently at the stage of adapting Web2 models to Web3, and a model built entirely on Web3 data has yet to appear. Visionary projects and vast resources dedicated to Web3-specific problems will bring Web3 its own ChatGPT-level killer application.
There are many promising directions for exploring the technical foundations of generative AI, such as chain-of-thought (CoT) techniques. CoT enables large language models to make a significant leap in multi-step reasoning, although it also highlights, and even exacerbates, the limitations of large models in complex logical reasoning. Interested readers can explore the original chain-of-thought paper.
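For readers unfamiliar with the technique, a minimal example of a chain-of-thought prompt: the model is shown explicit intermediate steps before the final answer. The prompt text is purely illustrative.

```python
prompt = (
    "Q: A wallet holds 3 NFTs; it buys 2 more and sells 1. "
    "How many NFTs does it hold?\n"
    "A: Let's think step by step. Start with 3. "
    "Buying 2 gives 3 + 2 = 5. Selling 1 gives 5 - 1 = 4. "
    "So the answer is 4."
)
# Prepending worked, step-by-step examples like this to a new question
# markedly improves an LLM's multi-step reasoning accuracy.
```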
After ChatGPT, various GPT-themed projects appeared in Web3, but simply bolting GPT onto smart contracts does not meet user needs. A year after ChatGPT's release, huge potential remains. Future products should start from the real needs of Web3 users; as Web3 technology matures, applications of generative AI in Web3 will surely be broad and exciting.