Supermicro launches highly optimized AI solutions, based on AMD Instinct MI350 Series GPUs and AMD ROCm™ Software, delivering breakthrough inference performance and power efficiency.
The new Supermicro H14 GPU solutions are powered by the newest 4th generation AMD CDNA™ architecture, delivering optimized performance and efficiency for large AI training models and high-speed inference workloads
Large memory capacity with 2.304TB total HBM3e per 8-GPU server, delivering faster computation and more efficient scaling for AI inferencing and training
Offers an industry-leading portfolio of more than 30 solutions designed for air- or liquid-cooled NVIDIA HGX™ B200, liquid-cooled NVIDIA GB200 NVL72, and NVIDIA RTX PRO 6000 Blackwell Server Edition
Speeds up time-to-online through NVIDIA-Certified systems and NVIDIA Enterprise AI Factory Validated Designs
Future-ready solution stack supports upcoming NVIDIA GB300 NVL72 and HGX B300 NVL8 for seamless technology transitions
More than 20 systems are available with NVIDIA RTX PRO 6000 Blackwell GPUs, including NVIDIA-Certified systems
Supermicro is collaborating on the development of new NVIDIA Enterprise AI Factory Validated Designs based on RTX PRO Servers and NVIDIA HGX™ B200 systems
Supermicro's new 4-GPU system, based on the NVIDIA MGX™ reference design, brings the NVIDIA RTX PRO Server closer to the edge for more powerful AI inference
Easy-to-design, easy-to-build, easy-to-deploy, and easy-to-operate solution for all critical computing and cooling infrastructure
Quick time-to-deployment and quick time-to-online with everything required to fully outfit AI/IT data centers
Saves cost with a modularized building-block solution architecture, from system to rack to data center scale
High quality and high availability with Supermicro's industry-leading design, manufacturing capacity, management software, on-site services, and global support