What is CXL memory pooling technology? Explained in one article!
Time : 2025-10-30 15:54:42
Edit : Jtti

The explosive growth of computing power is putting traditional server architectures under strain. Because each server's memory is configured in isolation, utilization often falls below 30%, while workloads such as training large AI models demand exponentially more memory. Against this backdrop, CXL (Compute Express Link) memory pooling has emerged and is fundamentally reshaping data center architecture.

CXL Technology Foundation: Building a Universal Bridge for Memory Sharing

CXL is an open, high-speed interconnect standard built on the PCIe physical layer that adds the cache-coherency support PCIe lacks. It is backed by industry leaders such as Intel, Google, and Microsoft, giving it the weight of a unified industry standard.

The CXL standard comprises three sub-protocols: CXL.io handles discovery and basic I/O and is functionally equivalent to PCIe 5.0; CXL.cache lets accelerators coherently cache host memory; and CXL.mem lets the host processor access device-attached memory with ordinary load/store instructions. Working together, these three protocols allow memory resources to be shared coherently among computing devices.
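From software's point of view, a CXL.mem access is just an ordinary load or store to a mapped address range. The minimal sketch below assumes a Linux host that exposes a Type-3 expander as a device-DAX node; the path /dev/dax0.0 and the 2 MiB mapping size are illustrative assumptions, not something the CXL specification dictates.

```python
import mmap
import os
import struct

# Assumption: the kernel exposes the CXL Type-3 expander as a device-DAX
# node. The path and the 2 MiB mapping size are illustrative placeholders.
DAX_PATH = "/dev/dax0.0"
MAP_SIZE = 2 * 1024 * 1024

fd = os.open(DAX_PATH, os.O_RDWR)
try:
    # Map the expander's memory into this process (MAP_SHARED by default).
    buf = mmap.mmap(fd, MAP_SIZE)

    # A store: a plain 8-byte memory write -- no DMA descriptor, no RDMA
    # verb, no explicit copy into a bounce buffer.
    buf[0:8] = struct.pack("<Q", 0x1122334455667788)

    # A load: read it back the same way; CXL.mem carries the CPU's
    # load/store transactions to the device.
    (value,) = struct.unpack_from("<Q", buf, 0)
    print(hex(value))

    buf.close()
finally:
    os.close(fd)
```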

CXL devices fall into three types: Type-1 devices are accelerators without their own exposed memory, such as smart network interface cards (NICs); Type-2 devices are accelerators with integrated local memory, such as GPUs and FPGAs; and Type-3 devices are dedicated memory expansion and pooling devices, which form the core of the memory pooling architecture.
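Which sub-protocols each type negotiates is a convenient way to keep the three classes apart. The table-in-code below is our own illustration; the class and field names are invented, but the protocol mix per device type follows the CXL specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CxlDeviceType:
    name: str
    example: str
    protocols: tuple   # sub-protocols this device class negotiates

DEVICE_TYPES = (
    # Type-1: caching accelerators with no host-managed device memory.
    CxlDeviceType("Type-1", "smart NIC", ("CXL.io", "CXL.cache")),
    # Type-2: accelerators with their own local memory (GPU, FPGA).
    CxlDeviceType("Type-2", "GPU / FPGA", ("CXL.io", "CXL.cache", "CXL.mem")),
    # Type-3: memory expanders and pools -- the heart of memory pooling.
    CxlDeviceType("Type-3", "memory expander", ("CXL.io", "CXL.mem")),
)

for dev in DEVICE_TYPES:
    print(f"{dev.name:7} {dev.example:16} {' + '.join(dev.protocols)}")
```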

Memory Pooling Architecture: From Exclusive Resource Ownership to a Sharing Economy

The core idea of CXL memory pooling is to decouple memory from any specific server and turn it into a resource pool shared by multiple compute nodes. With the switching capability introduced in CXL 2.0, a pooled memory device can operate as a Multi-Logical Device (MLD), partitioned into logical devices so that up to 16 hosts can simultaneously access different portions of its memory.
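To make the MLD idea concrete, here is a toy pool manager in Python. It is our own simplification rather than any API defined by the CXL specification: it carves one pooled device into logical devices, binds each to a requesting host, and refuses to exceed the 16 hosts a CXL 2.0 MLD can serve.

```python
class MldPool:
    """Toy model of a CXL 2.0 Multi-Logical Device (MLD) memory pool."""

    MAX_HOSTS = 16  # CXL 2.0 limit on logical devices per MLD

    def __init__(self, capacity_gib: int):
        self.capacity_gib = capacity_gib
        self.free_gib = capacity_gib
        self.bindings = {}  # host -> GiB of pooled memory assigned

    def bind(self, host: str, size_gib: int) -> None:
        """Carve a logical device of size_gib out of the pool for host."""
        if host not in self.bindings and len(self.bindings) >= self.MAX_HOSTS:
            raise RuntimeError("an MLD can expose at most 16 logical devices")
        if size_gib > self.free_gib:
            raise RuntimeError("pool exhausted")
        self.bindings[host] = self.bindings.get(host, 0) + size_gib
        self.free_gib -= size_gib

    def release(self, host: str) -> None:
        """Return a host's logical device capacity to the pool."""
        self.free_gib += self.bindings.pop(host, 0)


pool = MldPool(capacity_gib=1024)   # one 1 TiB pooled expander
pool.bind("host-01", 256)           # host 1 borrows 256 GiB for a peak
pool.bind("host-02", 128)
pool.release("host-01")             # memory returns when the job ends
print(pool.free_gib)                # 896
```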

Compared with traditional RDMA-based disaggregation, CXL memory pooling has clear advantages. RDMA schemes need buffer pools on both ends (raising memory cost), suffer read/write amplification that can waste up to 32x the bandwidth actually needed, and provide no cache coherency. CXL natively supports memory semantics, avoiding the extra overhead of data copies and guaranteeing cache coherency across nodes.
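The amplification figure is straightforward arithmetic: if remote memory can only be fetched in page-sized units, touching a couple of cache lines drags an entire page across the network, whereas CXL.mem moves data at cache-line granularity. The numbers below are assumptions chosen to illustrate the worst case cited above, not measurements.

```python
PAGE_SIZE = 4096     # bytes per RDMA-style page fetch (illustrative)
CACHE_LINE = 64      # bytes per CXL.mem transfer
USEFUL_BYTES = 128   # bytes the application actually touches (two cache lines)

# Page-granular remote access: the whole 4 KiB page crosses the network.
rdma_bytes_moved = PAGE_SIZE
# Cache-line-granular CXL access: only the touched lines cross the link.
cxl_bytes_moved = -(-USEFUL_BYTES // CACHE_LINE) * CACHE_LINE  # round up to a line

amplification = rdma_bytes_moved / cxl_bytes_moved
print(f"read amplification: {amplification:.0f}x")   # 32x for this access pattern
```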

Performance Advantages: A Key Path to Overcoming the Memory Wall

The most direct value of CXL memory pooling technology lies in its comprehensive performance improvements:

Low Latency and High Bandwidth: CXL delivers lower latency than RDMA across access granularities, from 64 B cache-line accesses to 16 KB page-sized transfers. Pooled CXL deployments can reach TB/s-class aggregate bandwidth with access latencies in the hundreds of nanoseconds.
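A quick back-of-envelope puts the bandwidth claim in context: one x16 CXL link on the PCIe 5.0 physical layer tops out at roughly 63 GB/s per direction, so TB/s-class figures describe the aggregate across many links or pooled devices. The per-lane rate and encoding below are the standard PCIe 5.0 parameters; the 1 TB/s aggregate target is illustrative.

```python
# Standard PCIe 5.0 link parameters reused by CXL 1.1/2.0; the aggregate
# target below is illustrative, not a benchmark result.
GT_PER_SEC = 32            # transfers per second per lane (giga)
LANES = 16                 # a x16 CXL link
ENCODING = 128 / 130       # 128b/130b line encoding overhead

per_link_gbit = GT_PER_SEC * LANES * ENCODING     # ~504 Gbit/s per direction
per_link_gbyte = per_link_gbit / 8                # ~63 GB/s per direction

links_for_1_tb = 1000 / per_link_gbyte            # links needed for ~1 TB/s aggregate
print(f"~{per_link_gbyte:.0f} GB/s per x16 link; ~{links_for_1_tb:.0f} links for 1 TB/s")
```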

Improved Memory Utilization: By scheduling allocations so that servers' memory peaks are staggered rather than simultaneous, CXL pooling can raise memory utilization from under 30% to over 70%, as the toy calculation below illustrates. This on-demand model works like carpooling: memory is assigned to hosts dynamically as they need it and returned to the pool when they do not.
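The demand profiles below are invented for illustration; they are chosen so the before/after utilization lands near the 30% and 70% figures quoted above.

```python
# Illustrative demand profiles (GiB) for three hosts over four time slots;
# the peaks are deliberately staggered so they never coincide.
demand = {
    "host-A": [10, 200, 10, 10],
    "host-B": [10, 10, 200, 10],
    "host-C": [10, 10, 10, 200],
}
slots = len(next(iter(demand.values())))

# Traditional: every server buys enough DIMMs for its own peak.
per_host_provision = sum(max(series) for series in demand.values())      # 600 GiB

# Pooled: the shared pool only has to cover the worst combined slot.
combined = [sum(series[t] for series in demand.values()) for t in range(slots)]
pooled_provision = max(combined)                                          # 220 GiB

avg_demand = sum(sum(series) for series in demand.values()) / slots       # 172.5 GiB

print(f"traditional utilization: {avg_demand / per_host_provision:.0%}")  # ~29%
print(f"pooled utilization:      {avg_demand / pooled_provision:.0%}")    # ~78%
```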

Reduced Total Cost of Ownership: Compared with traditional architectures that reach high capacities by loading every server with memory modules, a CXL add-in-card (AIC) expander solution can cut total cost by up to 25%. Enterprises no longer need to over-provision every server just to cover peak loads.
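The cost argument follows the same logic: per-server provisioning pays for the sum of the peaks, while a pooled design pays for local baseline memory plus one shared pool and the pooling hardware. Every price and capacity below is a made-up illustrative figure, not vendor pricing.

```python
# All figures are made-up illustrative numbers, not vendor pricing.
SERVERS = 16
PEAK_PER_SERVER_GIB = 512        # capacity each server would buy standalone
BASELINE_PER_SERVER_GIB = 192    # local DRAM kept per server once a pool exists
SHARED_POOL_GIB = 2048           # pooled CXL capacity sized for the combined peak
PRICE_PER_GIB = 4.0              # assumed DRAM price, $/GiB
POOL_HARDWARE_COST = 4000.0      # assumed cost of expander cards and CXL switch

traditional = SERVERS * PEAK_PER_SERVER_GIB * PRICE_PER_GIB
pooled = (SERVERS * BASELINE_PER_SERVER_GIB + SHARED_POOL_GIB) * PRICE_PER_GIB \
         + POOL_HARDWARE_COST

# With these assumptions the pooled design comes out roughly 25% cheaper.
saving = 1 - pooled / traditional
print(f"traditional ${traditional:,.0f} vs pooled ${pooled:,.0f} -> {saving:.0%} saved")
```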

Technological Evolution: From CXL 2.0 to the Composable Architecture of the Future

Mainstream CXL 2.0 already supports memory pooling and switching, while CXL 3.0 goes significantly further in scalability, cache coherency, and fabric topology.

Enterprises are planning their evolution path from CXL 2.0 to 3.0, aiming to expand resource pooling in multi-level switch scenarios and introduce more resource types, ultimately leading to a composable architecture based on CXL Fabric. This architecture will support more flexible resource allocation and more efficient data flow, completely breaking down the physical boundaries of traditional servers.

As the CXL 3.0 standard advances, shared memory will further reshape data center architecture. Industry forecasts suggest that although fewer than 10% of CPUs support CXL today, virtually all new server CPUs are expected to support it by 2027, and the global CXL market is projected to reach $15 billion by 2028.

Future Outlook: A New Paradigm for Computing Infrastructure

CXL memory pooling technology represents a fundamental shift from fixed-configuration computing resources to flexible, composable infrastructure. It enables cloud computing operators and enterprise data centers to dynamically manage and allocate memory resources, just as they manage computing and storage resources.

As CXL technology and its ecosystem mature, data centers will move steadily toward the vision of complete resource decoupling: CPU, GPU, memory, and other resources independently pooled and dynamically combined on demand. This architectural transformation will ultimately break down the physical boundaries of the traditional server, providing more efficient and economical infrastructure for the massive computing demands of the AI era.

From remote direct memory access to memory pooling, CXL technology is ushering in a new era of high-efficiency computing. For enterprises pursuing computing power optimization and cost control, understanding and deploying CXL memory pooling technology has become a key element in maintaining future competitiveness.
