A Technical Overview of Reddio: From Parallel EVM to AI

Author: Wuyue, Geek web3

As blockchain technology advances, performance optimization has become a key issue. Ethereum's roadmap is clearly centered on Rollups, yet the EVM's serial transaction processing is a shackle that cannot satisfy future high-concurrency computing scenarios.

In a previous article, "Looking at the Optimization of Parallel EVM from Reddio," we briefly summarized Reddio's parallel EVM design. In today's article, we provide a more in-depth interpretation of its technical solution and the scenarios where it combines with AI.

Since Reddio's technical solution adopts CuEVM, a project that uses GPUs to improve EVM execution efficiency, we will start with CuEVM.

CUDA Overview

CuEVM is a project that accelerates the EVM with GPUs: it converts Ethereum EVM opcodes into CUDA kernels for parallel execution on NVIDIA GPUs, improving the execution efficiency of EVM instructions through the GPU's parallel computing power. NVIDIA GPU users will often have heard of the term CUDA:

Compute Unified Device Architecture is a parallel computing platform and programming model developed by NVIDIA. It allows developers to harness the GPU's parallel computing power for general-purpose computing (such as crypto mining and ZK proof generation), not just graphics processing.

As an open parallel computing framework, CUDA is essentially an extension of C/C++, and any systems programmer familiar with C/C++ can pick it up quickly. A very important concept in CUDA is the kernel (kernel function), which is also a C++ function.

But unlike regular C++ functions, which execute only once per call, a kernel function launched with the <<<...>>> syntax is executed N times in parallel by N different CUDA threads.

Each CUDA thread is assigned an independent thread ID, and CUDA adopts a thread hierarchy, grouping threads into blocks and grids to make large numbers of parallel threads manageable. Through NVIDIA's nvcc compiler, CUDA code is compiled into programs that run on the GPU.
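CUDA itself is written in C++, but the launch model described above can be sketched in Python. The hypothetical kernel below runs once per (block, thread) pair and derives a global thread ID from its block and thread indices, mirroring how a `kernel<<<blocks, threads_per_block>>>(...)` launch spawns N parallel threads:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch only: each CUDA thread computes its global ID and
# guards against running past the end of the data, as real kernels do.
def kernel(block_idx, thread_idx, threads_per_block, data, out):
    tid = block_idx * threads_per_block + thread_idx  # global thread ID
    if tid < len(data):                               # bounds guard
        out[tid] = data[tid] * 2

def launch(blocks, threads_per_block, data):
    out = [0] * len(data)
    with ThreadPoolExecutor() as pool:
        for b in range(blocks):
            for t in range(threads_per_block):
                pool.submit(kernel, b, t, threads_per_block, data, out)
    return out

print(launch(2, 4, [1, 2, 3, 4, 5]))  # 2 blocks x 4 threads cover 5 elements
```

Each element is written by exactly one thread, which is why the grid/block decomposition scales so well on real GPUs.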

Basic workflow of CuEVM

With these basic CUDA concepts in place, we can look at the workflow of CuEVM.

The main entry point of CuEVM is run_interpreter, which takes the transactions to be processed in parallel as input in the form of a JSON file. From the project's test cases, we can see that all inputs are standard EVM content; developers do not need to translate or preprocess anything.

Inside run_interpreter(), the kernel_evm() kernel function is called using CUDA's <<<...>>> syntax. As mentioned above, kernel functions are executed in parallel on the GPU.

The kernel_evm() method calls evm->run(), where a large number of branch judgments convert each EVM opcode into CUDA operations.

Taking the EVM addition opcode OP_ADD as an example, we can see that it converts ADD into cgbn_add. CGBN (Cooperative Groups Big Numbers) is a high-performance multi-precision integer arithmetic library for CUDA.

These two steps convert EVM opcodes into CUDA operations; in that sense, CuEVM is a complete implementation of the EVM's operations on CUDA. Finally, run_interpreter() returns the execution results, i.e., the world state and other information.
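The dispatch pattern described above can be sketched in Python. This is not CuEVM's actual code, just an illustration of how an interpreter loop branches on each opcode and maps it to an arithmetic primitive (where CuEVM would call cgbn_add, we use Python integers with 256-bit wraparound):

```python
# Hypothetical mini-interpreter: each opcode branches to its arithmetic
# primitive, the way CuEVM maps OP_ADD to cgbn_add on the GPU.
OP_STOP, OP_ADD, OP_MUL = 0x00, 0x01, 0x02
MOD = 2**256  # EVM words are 256-bit; CGBN provides this width on the GPU

def run(bytecode, stack):
    for op in bytecode:
        if op == OP_ADD:          # cgbn_add in CuEVM
            b, a = stack.pop(), stack.pop()
            stack.append((a + b) % MOD)
        elif op == OP_MUL:        # cgbn_mul in CuEVM
            b, a = stack.pop(), stack.pop()
            stack.append((a * b) % MOD)
        elif op == OP_STOP:
            break
    return stack

print(run([OP_ADD, OP_MUL, OP_STOP], [2, 3, 4]))  # (3 + 4), then 2 * 7
```

In CuEVM the same loop body runs inside the kernel, so every GPU thread interprets its own transaction's bytecode concurrently.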

At this point, the basic operating logic of CuEVM has been introduced.

CuEVM can process transactions in parallel, but the project's original purpose (or primary use case) is fuzzing: an automated software testing technique that identifies potential errors and security issues by feeding a program large amounts of invalid, unexpected, or random data and observing its responses.

Fuzzing is clearly well suited to parallel processing. CuEVM does not handle transaction conflicts and similar issues; that is simply not its concern. Anyone who wants to integrate CuEVM must therefore handle conflicting transactions first.

Our previous article, "Looking at the Optimization of Parallel EVM from Reddio," already introduced Reddio's conflict-handling mechanism, so we will not repeat it here. After Reddio sorts transactions through its conflict-handling mechanism, it can send them to CuEVM. In other words, Reddio L2's transaction pipeline can be divided into two parts: conflict handling + CuEVM parallel execution.
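The division of labor above can be illustrated with a common conflict-detection approach based on read/write sets. This is a hypothetical sketch, not Reddio's actual implementation: transactions whose state accesses overlap are placed in different batches, so each batch can then be handed to a parallel executor such as CuEVM:

```python
# Hypothetical sketch of "conflict handling + parallel execution":
# two transactions conflict if either one writes state the other touches.
def conflicts(tx_a, tx_b):
    return (tx_a["writes"] & (tx_b["reads"] | tx_b["writes"])
            or tx_b["writes"] & tx_a["reads"])

def batch(txs):
    batches = []
    for tx in txs:
        for b in batches:            # put tx in the first conflict-free batch
            if not any(conflicts(tx, other) for other in b):
                b.append(tx)
                break
        else:
            batches.append([tx])     # no batch fits; start a new one
    return batches

txs = [
    {"id": 1, "reads": {"A"}, "writes": {"B"}},
    {"id": 2, "reads": {"C"}, "writes": {"D"}},  # independent of tx 1
    {"id": 3, "reads": {"B"}, "writes": {"A"}},  # conflicts with tx 1
]
print([[t["id"] for t in b] for b in batch(txs)])  # [[1, 2], [3]]
```

Each inner batch contains only mutually independent transactions, so it can be executed in parallel on the GPU, while batches themselves run one after another.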

The Three-Way Intersection of Layer 2, Parallel EVM, and AI

As mentioned earlier, parallel EVM and L2 are just Reddio's starting point; its roadmap will clearly combine them with the AI narrative. Reddio, which uses GPUs for high-speed parallel transaction processing, is naturally suited to AI workloads in several respects:

  • GPUs have strong parallel processing capabilities and are well suited to the convolution operations in deep learning, which are essentially large-scale matrix multiplications, a task GPUs are optimized for.

  • The GPU's thread hierarchy maps well onto the data structures in AI computing, improving efficiency and masking memory latency through thread over-subscription and warp execution units.

  • Arithmetic intensity is a key metric of AI computing performance. GPUs optimize for arithmetic intensity, for example by introducing Tensor Cores, which improve the performance of matrix multiplication in AI workloads and strike an effective balance between computation and data transfer.

So how do we combine AI and L2?

We know that in a Rollup architecture the network contains not just the sequencer but also roles such as supervisors and forwarders that verify or collect transactions. They essentially run the same client as the sequencer, just with different functions. In traditional Rollups, the functions and authority of these secondary roles are very limited. For example, the watcher role in Arbitrum is basically passive, defensive, and altruistic, and its profit model is questionable.

Reddio will adopt a decentralized sequencer architecture in which miners provide GPUs as nodes. The entire Reddio network can then evolve from a pure L2 into a comprehensive L2 + AI network, enabling several AI + blockchain use cases:

A basic interaction network for AI Agents

With the continuous evolution of blockchain technology, AI Agents have great application potential in blockchain networks. Take AI Agents that execute financial trades as an example: these agents can independently make complex decisions and execute trades, and can even respond quickly under high-frequency conditions. However, L1 is fundamentally incapable of carrying the enormous transaction load of such intensive operations.

As an L2 project, Reddio can greatly improve parallel transaction processing capacity through GPU acceleration. Compared with L1, an L2 that supports parallel transaction execution has higher throughput and can efficiently handle high-frequency transaction requests from large numbers of AI Agents, ensuring smooth network operation.

In high-frequency trading, AI Agents have extremely strict requirements on transaction speed and response time. L2 reduces transaction verification and execution time, significantly lowering latency, which is crucial for AI Agents that need millisecond-level responses. Migrating large volumes of transactions to L2 also relieves congestion on the main network, making operation more cost-effective for AI Agents.

As L2 projects such as Reddio mature, AI Agents will play a more important role on-chain and drive innovation in combining DeFi and other blockchain applications with AI.

Decentralized computing power market

In the future, Reddio will adopt a decentralized sequencer architecture in which miners compete for sequencing rights with GPU computing power. The overall GPU performance of network participants will gradually improve under competition, and may even reach levels usable for AI training.

This builds a decentralized GPU computing power market that provides lower-cost compute for AI training and inference. GPU compute at every scale, from personal computers to data center clusters, can join the market, contribute idle capacity, and earn revenue. This model can reduce the cost of AI computing and allow more people to participate in AI model development and application.

In the decentralized compute market use case, the sequencer may not be primarily responsible for AI computation itself; its main functions are to process transactions and to coordinate AI compute across the network. For compute and task allocation, there are two modes:

  • Top-down centralized allocation. Because there is a sequencer, it can assign incoming compute requests to nodes that meet the requirements and have good reputations. Although this allocation method has theoretical problems of centralization and unfairness, in practice its efficiency advantages far outweigh its drawbacks; in the long run the sequencer must serve the interests of the whole network to survive, so there are implicit but direct constraints ensuring that its bias does not become too serious.

  • Bottom-up spontaneous task selection. Users can also submit AI computation requests directly to third-party nodes, which is clearly more efficient than going through the sequencer in specialized AI application domains, and also avoids the sequencer's censorship and bias. After the computation completes, the node synchronizes the result to the sequencer for inclusion on-chain.
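Neither mode is publicly specified in detail, but the top-down mode can be sketched as a simple matching problem. The node fields below (free_gpus, reputation) are invented for illustration and are not part of Reddio's design:

```python
# Hypothetical sketch of top-down allocation: the sequencer routes each
# compute request to the best-reputation node with enough free capacity.
def assign(requests, nodes):
    plan = {}
    for req in requests:
        candidates = [n for n in nodes if n["free_gpus"] >= req["gpus"]]
        if not candidates:
            plan[req["id"]] = None        # no node qualifies; request waits
            continue
        best = max(candidates, key=lambda n: n["reputation"])
        best["free_gpus"] -= req["gpus"]  # reserve capacity on the chosen node
        plan[req["id"]] = best["name"]
    return plan

nodes = [
    {"name": "node-a", "free_gpus": 8, "reputation": 0.9},
    {"name": "node-b", "free_gpus": 2, "reputation": 0.7},
]
print(assign([{"id": "train-1", "gpus": 4}, {"id": "infer-1", "gpus": 2}], nodes))
```

The bottom-up mode simply bypasses this matching step: the user picks the node, and only the result synchronization flows back through the sequencer.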

We can see that in the L2 + AI architecture, the compute market is extremely flexible and can gather compute from both directions to maximize resource utilization.

On-chain AI inference

At present, open-source models are mature enough to meet diverse needs. With the standardization of AI inference services, it becomes worth exploring how to bring compute on-chain and achieve automated pricing. However, this requires overcoming several technical challenges:

  1. Efficient request distribution and recording: large-model inference is latency-sensitive, so an efficient request-distribution mechanism is critical. Although request and response data is large and private, and should not be exposed on the blockchain, a balance between recording and verification must still be found, for example by storing hashes.

  2. Verification of compute node output: did the node really complete the assigned computing task? For example, a node might falsely report results computed with a small model in place of the large one.

  3. Smart contract inference: many scenarios require combining AI models with smart contracts for computation. Since AI inference is non-deterministic and cannot be used for every on-chain operation, the logic of future AI dApps will likely sit partly off-chain, with the other part in on-chain contracts that validate the inputs provided from off-chain and enforce value legality. In the Ethereum ecosystem, combining with smart contracts means facing the EVM's inefficient seriality.
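The hash-based balance in challenge 1 between privacy and verifiability can be sketched as follows. The payload fields are hypothetical and this is not Reddio's actual scheme; it only shows the commit-then-verify pattern:

```python
import hashlib
import json

# Hypothetical sketch: full request/response data stays off-chain; only a
# digest is recorded on-chain, so a later disclosed payload can be checked
# against what was committed without exposing the data itself.
def commit(payload: dict) -> str:
    blob = json.dumps(payload, sort_keys=True).encode()  # canonical encoding
    return hashlib.sha256(blob).hexdigest()

onchain_record = commit({"prompt": "price BTC?", "model": "llama-3", "output": "..."})

def verify(payload: dict, recorded_hash: str) -> bool:
    return commit(payload) == recorded_hash

print(verify({"prompt": "price BTC?", "model": "llama-3", "output": "..."}, onchain_record))
```

Any tampering with the off-chain payload breaks the match against the on-chain digest, which is exactly the recording/verification balance the text describes.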

But in Reddio’s architecture, these are relatively easy to solve:

  1. The sequencer is far more efficient than L1 and can be considered roughly equivalent to Web2 efficiency. As for where and how data is recorded and retained, various cheap DA solutions can address that.

  2. AI computation results can ultimately be verified with ZKPs. The characteristic of ZKP is that verification is very fast while proof generation is slow, and proof generation happens to be acceleratable with GPUs or TEEs.

  3. The Solidity → CUDA → GPU parallel-EVM main line is Reddio's foundation, so on the surface this is the simplest question for Reddio. Currently Reddio is working with ai16z's Eliza to introduce its modules into Reddio, which is a very worthwhile direction to explore.

Summary

Overall, Layer 2 solutions, parallel EVM, and AI seem unrelated at first glance, but Reddio cleverly combines these major fields of innovation by making full use of the GPU's computing characteristics.

By leveraging the parallel computing characteristics of GPUs, Reddio improves transaction speed and efficiency on Layer 2, enhancing the performance of Ethereum's Layer 2 ecosystem. Integrating AI into blockchain is a novel and promising attempt: AI can provide intelligent analysis and decision-making support for on-chain operations, enabling more intelligent and dynamic blockchain applications. This cross-field integration has undoubtedly opened up new paths and opportunities for the entire industry.

However, it should be noted that this field is still in its early stages and requires a great deal of research and exploration. The continuous iteration and optimization of the technology, together with the imagination and actions of market pioneers, will be the key forces driving this innovation toward maturity. Reddio has taken an important and bold step at this intersection, and we look forward to seeing more breakthroughs and surprises in this field in the future.
