AI on AO Press Conference: Three major AI technology breakthroughs in the AO protocol

Author: Kyle_13; Source: the author's Twitter @kylewmi

Thank you all for joining us today. We have a series of very exciting developments in AO technology to share with you. We'll run a demo first, and then Nick and I will try to build an AI agent live: one that uses a large language model inside a smart contract to buy and sell based on the sentiment of the chat you're about to hear in the system. We'll build it from scratch on-site today, and we hope everything goes well.

Yes, you will see how to do all of this yourself.

The technological advances here genuinely put AO far ahead of other smart contract systems. That was already true before, and now AO looks less like a traditional smart contract network and more like a decentralized supercomputer, while still retaining all the features of a smart contract network. So we are very excited to share all of this with you. Without further ado, let's start the demo, have a discussion, and then build something together on-site.

Hello everyone, thank you for joining us today. We are very excited to announce three major technical updates to the AO protocol. Together, they achieve one big goal: supporting large language models running in a decentralized environment as part of smart contracts. These are not just toy models, small models, or models compiled into bespoke binaries.

This is a complete system that lets you run nearly all of the major models that are currently open source and available. Llama 3, for example, runs on-chain inside smart contracts, and the same goes for GPT-style models, Apple's models, and so on. This is the result of a joint effort across the entire ecosystem, and the three major technological advances form part of that system. So I am very excited to walk you through all of it.

The overall picture is that LLMs (large language models) can now run inside smart contracts. You may have heard about decentralized AI and AI cryptocurrencies many times. In fact, apart from the system we are going to discuss today, almost all of those systems use AI as an oracle: they run the AI off-chain and then post the execution results on-chain for some downstream purpose.

That is not what we are talking about. We are talking about large language model inference as part of smart contract state execution. All of this is possible thanks to the hard drive AO has and AO's hyper-parallel processing mechanism, which means you can run a great deal of computation without affecting the other processes I'm using. We believe this will let us create a very rich, decentralized, autonomous-agent financial system.

So far in decentralized finance (DeFi), we have essentially been able to make the execution of primitive transactions trustless. Interactions in different economic games, such as borrowing and exchanging, require no trust. But that is only one side of the problem, if you consider global financial markets.

Yes, there is a variety of economic primitives that work in different ways: bonds, stocks, commodities, derivatives, and so on. But when we really talk about the market, it is not just those; it is also the intelligence layer. It is the person who decides to buy, sell, borrow, or take part in various financial games.

So far, in the decentralized finance ecosystem, we have successfully moved all of these primitives to a trustless state. You can trade on Uniswap without trusting the operators of Uniswap; in fact, fundamentally, there are no operators. But the intelligence layer of the market has been left behind. So if you want to invest in cryptocurrency and don't want to do all the research and participation yourself, you have to find a fund.

You trust the fund, and it makes the smart decisions and passes them downstream to the network's basic primitive execution. We believe that in AO we actually have the ability to move the intelligent part of the market, the intelligence that drives decision making, into the network itself. A simple way to understand it is to imagine the following.

A hedge fund or portfolio-management application you can trust executes a set of intelligent instructions inside the network, transferring the network's trustlessness into the decision-making process itself. This means an anonymous account, say "Yolo 420 Trader Number One" (a bold, casual trader), can create a new and interesting strategy and deploy it to the network, and you can put capital into it without really needing to trust them.

You can now build autonomous agents that interact with large statistical models. The most common large statistical model is the large language model, which can process and generate text. This means you can put these models into smart contracts as part of a strategy devised by someone with novel ideas, and have them execute intelligently inside the network.

You can imagine doing some basic sentiment analysis. For example, you read the news and decide this is a good time to buy or sell a given derivative, a good time to do one thing or another. You can make human-like decisions in a trustless way. And this is not just theory. We created a fun meme coin called Llama Fed. The basic idea is that it is a fiat currency simulator run by a group of llamas, each represented by the Llama 3 model.
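To make the sentiment-driven trading idea concrete, here is a purely illustrative sketch. This is not AO code: the word lists, scoring rule, thresholds, and function names are all invented for this example, and a real agent on AO would query an on-chain LLM rather than a keyword list. The shape of the logic, though, is the same: score a signal, then map the score to an action.

```python
# Illustrative only: a toy rule-based sentiment scorer and a threshold
# policy that maps the score to a trading action. On AO, the scoring
# step would be performed by an on-chain large language model instead.

POSITIVE = {"surge", "rally", "adoption", "breakthrough", "record"}
NEGATIVE = {"hack", "crash", "ban", "lawsuit", "selloff"}

def sentiment_score(headline: str) -> int:
    """Count positive words minus negative words in the headline."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def decide(headline: str) -> str:
    """Map the sentiment score to a buy/sell/hold action."""
    score = sentiment_score(headline)
    if score > 0:
        return "buy"
    if score < 0:
        return "sell"
    return "hold"

print(decide("Record adoption drives market rally"))  # buy
print(decide("Exchange hack triggers selloff"))       # sell
```

The point is the control flow, not the scorer: once inference runs inside the contract, the `decide` step can execute trustlessly alongside the trades it triggers.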

They are like a cross between a llama and the Fed chairman: you can go talk to them and ask them to issue some tokens, and they will evaluate your request. The large language models themselves operate the monetary policy, completely autonomously and trustlessly. We built it, but we cannot control it. They run the monetary policy and decide who should get tokens and who should not. It is a very fun little application of this technology, and we hope it inspires all the other applications that are possible in the ecosystem.

To achieve this, we had to create three new fundamental capabilities for AO, some at the base protocol layer and others at the application layer. They are useful not just for executing large language models but much more broadly, which is exciting for AO developers. So I'm glad to introduce them to you today.

The first of these new technologies is WebAssembly 64-bit support. It sounds like jargon, but here is a way for everyone to understand what it means. Fundamentally, WebAssembly 64 support allows developers to create applications that use more than 4GB of memory. We'll get to the new limits shortly; they're pretty amazing.

If you are not a developer, think of it this way: someone asks you to write a book and you are excited about the idea, but they say you may only write 100 pages, no more and no less. You can still express the book's ideas, but not in a natural, normal way, because there is an external constraint and you have to bend your writing to fit it.

In the smart contract ecosystem, the constraints are far tighter than a 100-page limit, and I would say it was a bit like this even when building in earlier versions of AO. Ethereum has a 48KB memory limit, which is like being asked to write a book that is only one sentence long, using only the 200 most popular English words. Building a truly exciting application in such a system is extremely difficult.

Then there is Solana, where you can access 10MB of working memory. That is obviously an improvement, but we are basically talking about a single page. ICP, the Internet Computer Protocol, supports up to 3GB of memory; in theory it could be 4GB, but they had to lower it to 3GB. With 3GB of memory you can run many different applications, but you certainly cannot run large AI applications, which need to load huge amounts of data into main memory for fast access. That cannot be done effectively in 3GB of memory.

When we released AO in February this year, we also had a 4GB memory limit, which came from WebAssembly 32. Now, that memory limit has completely disappeared at the protocol level. Instead, the protocol-level memory limit is 18EB (exabytes). That is an enormous amount of storage.

It will be quite a while before that much is used as memory for computation rather than as long-term storage. At the implementation level, the compute units in the AO network now have access to 16GB of memory, and in the future it will be relatively easy to swap in larger memory without changing the protocol. 16GB is enough to run large language model computations, which means you can download and execute 16GB models on AO today, for example the unquantized version of Llama 3, as well as Falcon and many other models.
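The 18EB figure follows directly from the 64-bit address space. A quick back-of-the-envelope check (illustrative only):

```python
# A 64-bit address space can reference 2**64 bytes. Expressed in
# decimal exabytes (1 EB = 10**18 bytes), that is roughly 18.4 EB,
# which is where the ~18EB protocol-level limit comes from.

addressable_bytes = 2 ** 64
exabytes = addressable_bytes / 10 ** 18
print(f"{exabytes:.1f} EB")  # 18.4 EB
```

By contrast, 32-bit WebAssembly can only address 2**32 bytes, i.e. the 4GB ceiling AO launched with.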

This is a core component needed to build a foundational computing system for language intelligence. It is now fully supported on-chain as part of smart contracts, which we think is very, very exciting.

This removes a major computational limitation of AO and subsequent smart contract systems. When we released AO in February this year, you may have noticed that we said many times in the video that you have unlimited computing power, but with one limit: you could not exceed 4GB of memory. This is the lifting of that restriction. We think it is a very exciting advance, and 16GB is enough to run almost every model you would want to run in AI today.

We were able to raise the limit to 16GB without changing the protocol, and raising it further in the future will be relatively easy; the big step was getting WebAssembly 64 running in the first place. So this is, in itself, a huge improvement in the system's capabilities. The second major technology that enables large language models to run on AO is WeaveDrive.

WeaveDrive lets you access Arweave data from AO as if it were a local hard drive. This means you can open any transaction ID that has been authenticated by a scheduler unit and uploaded to the network, and of course read that data into your program, just like a file on your local hard drive.

As we all know, about 6 billion transactions' worth of data is currently stored on Arweave, so that is a huge starting dataset. It also means that, going forward, the incentive to upload data to Arweave increases, because that data can also be used in AO programs. For example, when we got a large language model running on Arweave, we uploaded models worth about $1,000 to the network. And this is just the beginning.

With a smart contract network that has a native file system, the number of applications you can build is enormous, so it's very exciting. Better still, we built the system so that you can stream data into the execution environment. It's a technical nuance, but go back to the book analogy.

Someone tells you: I want one piece of data from your book; I want one chart from it. In a naive system, and even in current smart contract networks this would already be a big improvement, you would hand over the entire book. That is obviously inefficient, especially if the book is a large statistical model thousands of pages long.

That is extremely inefficient. Instead, in AO you can read bytes directly: you go straight to the chart's position in the book, copy just the chart into your application, and execute. This greatly improves the system's efficiency. And it is not merely a minimum viable product (MVP); it is a fully functional, well-built data access mechanism. So you have an infinite computing system and an infinite hard drive; combine them and you have a supercomputer.
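The byte-level access described above can be illustrated with a plain-file analogy. This is not WeaveDrive's actual API (its real paths and interfaces are not shown here); it is just a sketch of the access pattern: seek to an offset and read only the range you need, instead of loading a multi-gigabyte file whole.

```python
import os
import tempfile

# Illustrative sketch: seek-and-read a small byte range from a large
# file instead of loading the entire file into memory. WeaveDrive
# exposes Arweave data to AO processes with this kind of access
# pattern; the file below is just a stand-in for an on-chain dataset.

def read_range(path: str, offset: int, length: int) -> bytes:
    """Read `length` bytes starting at `offset`, without loading the whole file."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

# Build a stand-in "book" with a chart buried in the middle, then
# fetch only the chart.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"A" * 1000 + b"CHART" + b"B" * 1000)
    path = tmp.name

chunk = read_range(path, 1000, 5)
print(chunk)  # b'CHART'
os.remove(path)
```

Memory use stays proportional to the range requested, not the file size, which is what makes streaming multi-gigabyte model weights into a process practical.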

Nothing like this has been built before, and now it is available to everyone at the lowest cost. That is where AO stands, and we are very excited about it. The implementation also sits at the operating-system level: we made WeaveDrive a subprotocol of AO, a compute unit extension that anyone can load. That is interesting because it is the first extension of its kind.

AO has always had the ability to add extensions to your execution environment. It is just like a physical computer: you want more memory, or a graphics card, so you physically plug a unit into the system. You can do the same with AO's compute units, and that is what we did here. So, at the operating-system level, you now have a hard disk that simply presents the stored data as a file system.

This means you can access this data not only when building AO applications the usual way, but from any application brought to the network. It is a broadly applicable capability, accessible to everything built on the system regardless of the language it is written in, whether Rust, C, Lua, Solidity, and so on, as if it were a native feature of the system. Building it also forced us to create the subprotocol specification, the method for building other compute unit extensions, so that others can build exciting things in the future.

Now we have the ability to run computations with memory of any size and to load data from the network into processes within AO. The next question is the inference itself.

Since we chose to build AO on WebAssembly as its primary virtual machine, it is relatively easy to compile and run existing code in that environment. And since we built WeaveDrive to be exposed as an operating-system-level file system, it is actually relatively easy to run Llama.cpp (an open source large language model inference engine) on the system.

This is very exciting because it means you can easily run not only this inference engine but many others as well. So the final component that gets a large language model running within AO is the inference engine itself. We ported a system called Llama.cpp, which sounds a bit obscure but is actually the leading open source model execution environment today.

Once we had the ability to hold arbitrary amounts of data in the system and to load arbitrary amounts of data from Arweave, getting it running directly inside an AO smart contract was actually relatively easy.

To achieve this, we also enabled a compute extension called SIMD (single instruction, multiple data), which lets these models run faster. So these models currently run on the CPU, but they are reasonably fast. If your computation is asynchronous, it should suit your use case: something like reading a news signal and then deciding which trades to execute works well under the current system. We also have some exciting upgrades to discuss soon regarding other acceleration mechanisms, such as using GPUs to accelerate large language model inference.

Llama.cpp lets you load not only Meta's leading model, Llama 3, but also many others: roughly 90% or more of the models you can download from the open source model site Hugging Face can run within the system, from GPT-2, if you want, up to Apple's own large language model family and many more. So we now have a framework to upload any model from Arweave, using the hard drive to bring in whichever model we want to run in the system. You upload them as ordinary data, then load them into an AO process, execute them, get the results, and work with them however you like.

We think this is a package that enables applications that were impossible in the previous smart contract ecosystem, and even where something like it might be possible, the number of architectural changes required in existing systems such as Solana makes it unforeseeable and not on their roadmap.

So, to show you this and make it real and understandable, we created a simulator, Llama Fed. The basic idea is that we have a committee of Fed members who are llamas, both in the sense of being the Meta Llama 3 model and in the sense of playing the Fed chairman.

We also tell them they are llamas, in the style of Alan Greenspan or a Fed chairman, and you can enter this small environment.

Some of you will find the environment familiar; it is actually like the Gather space we are working in today. You can talk to the llamas and ask them to give you some tokens for a very interesting project, and they will decide whether to grant them based on your request. You burn some wrapped Arweave tokens, wAR (provided by the AOX team), and they will give you tokens depending on whether they think your proposal is good. So this is a meme coin whose monetary policy is completely autonomous and intelligent. It is a simple form of intelligence, but it is still fun. It evaluates your proposal and everyone else's, and runs the monetary policy. All of this can now happen inside a smart contract, whether that means analyzing news headlines and making smart decisions, or interacting with customer support and returning value. Elliot will show it to you now.

Hello everyone, I’m Elliot, and today I’m going to show you Llama Land, an on-chain autonomous world running in AO, powered by Meta’s open source Llama 3 model.

The conversations you see here are not just between human players; there are also fully autonomous digital llamas.

For example, this llama is a human.

But this llama is an on-chain AI.

This building houses Llama Fed. It's like the Fed, but for llamas.

Llama Fed runs the world's first AI-powered monetary policy and mints Llama tokens.

This is King Llama. You can offer him wrapped Arweave tokens (wAR) and write a request to receive some Llama tokens.

The Llama King's AI will evaluate your request and decide whether to grant Llama tokens. Llama Fed's monetary policy is completely autonomous, with no human oversight. Every agent and every room in this world is itself an on-chain process on AO.

It looks like King Llama has granted us some tokens, and if I check my ArConnect wallet I can see they have already arrived. Llama Land is just the first AI-driven world implemented on AO. It is a framework for a new protocol that lets anyone build their own autonomous world; the only limit is your imagination. All of this is 100% on-chain and is only possible on AO.

Thanks, Elliot. What you just saw is not just a large language model participating in financial decision-making; it is running an autonomous monetary policy system. There is no backdoor, we cannot control it, and all of it is run by the AI itself. You also saw a small universe, a place you can walk through in physical space, where you can go and interact with financial infrastructure. We think it is more than just a fun little demo.

There is actually something quite interesting here: it brings together the different people who use a financial product. We see in the DeFi ecosystem that when someone wants to join a project, they first check it out on Twitter, then visit the website and interact with the basic primitives of the game.

Then they join a Telegram group or Discord channel, or talk to other users on Twitter. The experience is very fragmented, with everyone jumping between different applications. One interesting idea we are trying is to give these DeFi applications user interfaces where their communities can come together and jointly inhabit the autonomous space they collectively access, because it is a permanent web application that can be joined and experienced.

Imagine going to a place that looks like an auction house and chatting with other users who like the protocol. When there is activity in a financial mechanism running on AO, you can simply chat with the other users there. The community and social aspects are combined with the financial side of the product.

We think this is very interesting, and it has even wider implications. Here you could build an autonomous AI agent that wanders around this Arweave world, interacting with the applications and users it discovers. If you are building a metaverse, the first thing you do when creating an online game is create NPCs (non-player characters). Here, the NPCs can be general-purpose.

You have an intelligent system wandering around and interacting with the environment, so you don't have a user cold-start problem. You can have autonomous agents trying to make money for themselves, trying to make friends, and interacting with the environment like normal DeFi users. We think it's very interesting, if a little weird. We will wait and see.

Looking ahead, we also see opportunities to accelerate large language model execution in AO. Earlier I talked about the concept of compute unit extensions; that is how we built WeaveDrive.

And it doesn't stop at WeaveDrive: you can build any type of extension for AO's computing environment. There is a very exciting ecosystem project tackling GPU-accelerated execution of large language models, the Apus Network. I'll let them explain.

Hi, I'm Mateo. Today I am excited to introduce Apus Network. Apus Network is building a decentralized, trustless GPU network.

By leveraging Arweave's permanent on-chain storage, we provide an open source AO extension, a deterministic execution environment for GPUs, and an economic incentive model for decentralized AI using AO and APUS tokens. Apus Network will use GPU mining nodes that compete to perform optimal, trustless model training running on Arweave and AO. This ensures users can access the best AI models at the most cost-effective price. You can follow us on X (Twitter) @apus_network. Thanks.

That is the state of AI on AO today. You can try Llama Fed, and try building your own smart contract applications around large language models. We believe this is the beginning of bringing market intelligence into a decentralized execution environment. We are very excited about it and look forward to seeing what comes next. Thank you for joining us today; we look forward to talking with you again.
