Decentralized AI Edge Computing Network

In 2024, Microsoft introduced its AI PC standard (the basis for Copilot+ PCs), setting a benchmark for local AI computing capabilities. The standard requires a minimum NPU (Neural Processing Unit) capacity of 40 TOPS (tera operations per second) for local AI Copilot execution. Microsoft also established requirements for Windows Copilot support, covering NPU performance, memory, and other components needed to run generative AI and large language models (LLMs) on device.

In September 2024, Apple released the iPhone 16, featuring an AI assistant called Apple Intelligence. Apple announced that Apple Intelligence would be integrated across its entire ecosystem, including iPhones, Macs, and iPads, by combining local private data, on-device AI models (AFM On Device), cloud-based AI models (AFM on Server), and integrations with ChatGPT and Gemini. This expansion aims to give users an all-encompassing smart assistant for communication, task completion, and self-expression, all while prioritizing user privacy.

The EdgeX project features a distributed AI computing network that leverages both edge devices and server resources to create a collaborative AI computing environment. This network is designed to meet the needs of various AI agents, each with its own application characteristics and computational requirements. EdgeX focuses on the computational demands of hardware AI models, targeting a minimum capacity of 40 TOPS per hardware unit. To achieve this, high-performance inference and fine-tuning servers are deployed at edge nodes to support multi-model scheduling and fusion tasks on terminals.
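The document does not specify how EdgeX assigns work across nodes. As a minimal sketch only, the following shows one plausible way a coordinator could filter nodes by the 40 TOPS floor and pick the least-loaded eligible node; all names (`EdgeNode`, `schedule`, the load metric) are hypothetical, not part of the EdgeX protocol:

```python
from dataclasses import dataclass

MIN_TOPS = 40  # the per-unit NPU capacity floor targeted by EdgeX


@dataclass
class EdgeNode:
    node_id: str
    tops: float          # advertised NPU capacity in TOPS
    active_jobs: int = 0  # hypothetical load metric


def schedule(nodes, required_tops=MIN_TOPS):
    """Return the least-loaded node meeting the TOPS requirement, or None."""
    eligible = [n for n in nodes if n.tops >= required_tops]
    if not eligible:
        return None
    chosen = min(eligible, key=lambda n: n.active_jobs)
    chosen.active_jobs += 1
    return chosen


nodes = [
    EdgeNode("phone-01", tops=35),                 # below the 40 TOPS floor, skipped
    EdgeNode("pc-01", tops=45, active_jobs=2),
    EdgeNode("server-01", tops=120, active_jobs=5),
]
assert schedule(nodes).node_id == "pc-01"  # eligible and least loaded
```

A real deployment would also weigh model size, memory, network latency, and node reliability rather than job count alone; this sketch only illustrates the capacity-gating idea.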

EdgeX currently enables businesses to quickly create revenue-generating scenarios across a range of applications: large language models (LLMs) such as Llama 8B, 7B, and 3B variants; edge deployment; small AI models (e.g., intelligent perception); edge network applications; IoT hardware AI services; and more.
