SAN JOSE, Calif., Oct. 17, 2025 /PRNewswire/ — Global data centers are facing new challenges driven by AI – greater compute demand, larger working sets, and more stringent energy efficiency requirements. At this year’s OCP Global Summit, Compal Electronics (Compal; Ticker: 2324.TW) presented a comprehensive vision for the data center of the future, delivering end-to-end solutions that cover compute, memory, and cooling.
On the compute side, Compal showcased its latest AI server, the SGX30-2 / 10U, based on the NVIDIA HGX B300 platform. The system is built on the NVIDIA Blackwell architecture, supports eight NVIDIA Blackwell Ultra GPUs connected by fifth-generation NVIDIA NVLink, and features dual Intel® Xeon® 6 processors, purpose-built for large-scale AI model training, inference, and HPC (high-performance computing) workloads.
NVIDIA Blackwell Ultra GPUs, based on the NVIDIA Blackwell architecture, provide up to 2.1 TB of HBM3e memory and 1.8 TB/s of GPU-to-GPU NVLink bandwidth, for a total interconnect bandwidth of 14.4 TB/s, ensuring low-latency, high-speed access to massive working sets. The system delivers 144 PFLOPS of FP4 inference performance and approximately 72 PFLOPS of FP8 performance, a 7x increase in compute performance over the previous NVIDIA Hopper generation. This design enables enterprises to perform high-throughput training and efficient inference on a single platform, significantly shortening AI model development and deployment cycles.
The showcase also featured CXL (Compute Express Link) and RDMA (Remote Direct Memory Access) technologies for AI (Artificial Intelligence) Memory Expansion, addressing the growing memory bottlenecks in workloads such as large language model training and HPC. Through the CXL.mem protocol, servers can seamlessly access pooled SCM (Storage-Class Memory)-based Memory Expanders within a rack, enabling CPUs and GPUs to process larger datasets beyond HBM (High Bandwidth Memory) limits with cache-coherent efficiency.
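To make the pooled-memory idea concrete, the C sketch below shows how software typically consumes CXL-attached memory on Linux today: the expander is exposed by the kernel as a CPU-less NUMA node, and an application can place a buffer on that node explicitly with libnuma. The node number and buffer size are illustrative assumptions, not details of the showcased system.

/* cxl_mem_sketch.c: illustrative only. Allocate from a CXL memory
 * expander that Linux exposes as a CPU-less NUMA node.
 * Assumption: the expander appears as NUMA node 1 (check with `numactl -H`).
 * Build: gcc cxl_mem_sketch.c -lnuma -o cxl_mem_sketch
 */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CXL_NODE 1                      /* hypothetical node id for the expander */
#define BUF_SIZE (1UL << 30)            /* 1 GiB working buffer */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma not available on this system\n");
        return EXIT_FAILURE;
    }

    /* Bind the allocation to the CXL-backed node; loads and stores then
     * traverse CXL.mem transparently, with coherence handled in hardware. */
    void *buf = numa_alloc_onnode(BUF_SIZE, CXL_NODE);
    if (buf == NULL) {
        fprintf(stderr, "allocation on node %d failed\n", CXL_NODE);
        return EXIT_FAILURE;
    }

    memset(buf, 0, BUF_SIZE);           /* first touch places pages on the node */
    printf("1 GiB buffer placed on NUMA node %d (CXL expander)\n", CXL_NODE);

    numa_free(buf, BUF_SIZE);
    return EXIT_SUCCESS;
}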
Within this evolving memory architecture, the showcased solution bridges GPU memory and storage through a direct PCIe path. Its intelligent DMA (Direct Memory Access) offload and low-latency data-path design transform conventional NVMe (Non-Volatile Memory Express) storage into a memory-like extension, realizing the Storage-as-Memory concept and enabling high-speed data access without added CPU overhead.
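As a rough illustration of the Storage-as-Memory concept, the C sketch below maps a file on an NVMe device directly into a process's address space with mmap(2), so the application reads it with ordinary loads while the kernel and the drive's DMA engines page data in on demand. The file path is a placeholder, and the sketch stands in for, rather than reproduces, the intelligent DMA-offload data path described above.

/* storage_as_memory_sketch.c: illustrative only. Treat an NVMe-backed
 * file as a memory-like extension by mapping it into the address space.
 * The path below is a placeholder for a large dataset on NVMe storage.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/nvme/dataset.bin";   /* hypothetical dataset */

    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return EXIT_FAILURE; }

    /* Map the whole file; pages are faulted in and DMA-transferred from
     * the drive only when the application actually touches them. */
    unsigned char *data = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    /* Ordinary pointer access: no explicit read() calls, no staging buffers. */
    unsigned long checksum = 0;
    for (off_t i = 0; i < st.st_size; i += 4096)
        checksum += data[i];
    printf("touched %lld bytes, checksum %lu\n", (long long)st.st_size, checksum);

    munmap(data, st.st_size);
    close(fd);
    return EXIT_SUCCESS;
}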
Extending beyond the rack, RDMA allows direct data movement across servers and data centers via InfiniBand or RoCE (RDMA over Converged Ethernet) networks, supporting large-scale memory disaggregation and resource pooling. Together, these technologies redefine how data flows across AI clusters, creating a unified, reconfigurable, and energy-efficient infrastructure that evolves from today’s PCIe-based systems toward next-generation GPU direct storage and CXL-enabled architectures.
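For readers less familiar with RDMA, the short libibverbs sketch below shows the step that underpins memory disaggregation: registering a local buffer so the network adapter can serve remote reads and writes directly, without involving the host CPU. Queue-pair setup and the out-of-band exchange of the buffer address and rkey are omitted for brevity, and all names here are illustrative rather than part of the showcased stack.

/* rdma_reg_sketch.c: illustrative only. Register a buffer for remote
 * access with libibverbs (InfiniBand or RoCE); a peer that learns this
 * buffer's address and rkey can issue RDMA READ/WRITE to it directly.
 * Build: gcc rdma_reg_sketch.c -libverbs -o rdma_reg_sketch
 */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

#define REGION_SIZE (256UL << 20)       /* 256 MiB region offered for pooling */

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (devs == NULL || num == 0) {
        fprintf(stderr, "no RDMA-capable device found\n");
        return EXIT_FAILURE;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (ctx == NULL) {
        fprintf(stderr, "failed to open RDMA device\n");
        return EXIT_FAILURE;
    }

    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    void *region = malloc(REGION_SIZE);

    /* Pin the buffer and hand its translation to the NIC; remote peers can
     * then read or write it with no CPU involvement on this host. */
    struct ibv_mr *mr = ibv_reg_mr(pd, region, REGION_SIZE,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (mr == NULL) {
        fprintf(stderr, "memory registration failed\n");
        return EXIT_FAILURE;
    }

    printf("region %p registered: lkey=0x%x rkey=0x%x\n",
           region, mr->lkey, mr->rkey);
    /* The address and rkey would now be sent to peers out of band. */

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(region);
    return EXIT_SUCCESS;
}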
“With this comprehensive showcase, Compal demonstrates a clear, practical path from today’s infrastructure to the data center of the future, and underscores its role as a true system integrator — not just a technology provider, but a long-term strategic partner for enterprises in their digital transformation journey,” said Alan Chang, Vice President of the Infrastructure Solutions Business Group at Compal.
About Compal
Founded in 1984, Compal is a leading manufacturer in the notebook and smart device industry, creating brand value in collaboration with various sectors. Its groundbreaking product designs have received numerous international awards. In 2025, Compal was recognized by CommonWealth Magazine as one of Taiwan’s top 7 manufacturers and has consistently ranked among the Forbes Global 2000 companies. In recent years, Compal has actively developed emerging businesses, including cloud servers, automotive electronics, and smart medical devices, leveraging its integrated hardware and software R&D and manufacturing capabilities to create relevant solutions. For more information, please visit https://www.compal.com
Media Contact
Jack Wang, Vice President and Spokesperson, +886-2-8797-8588, [email protected]