X14 Hyper Datasheet
New X14 Hyper 1U and 2U rackmount systems featuring Intel® Xeon® 6700/6500 series processors with P-cores and 6700 series processors with E-cores
New X14 GrandTwin® multi-node systems featuring Intel® Xeon® 6700/6500 series processors with P-cores and 6700 series processors with E-cores
New X14 CloudDC systems with DC-MHS, featuring Intel® Xeon® 6700/6500 series processors with P-cores and 6700 series processors with E-cores
New X14 BigTwin® multi-node systems featuring Intel® Xeon® 6700/6500 series processors with P-cores and 6700 series processors with E-cores
Supermicro’s SuperCluster, accelerated by the NVIDIA Blackwell Platform, empowers the next stage of AI, defined by new breakthroughs, including the evolution of scaling laws and the rise of reasoning models. These new SuperCluster offerings powered by the NVIDIA Blackwell Platform are available in 42U, 48U, or 52U configurations. The upgraded cold plates and 250kW coolant distribution unit (CDU) more than double the cooling capacity of the previous generation. The new vertical coolant distribution manifold (CDM) means that horizontal manifolds no longer occupy valuable rack space. NVIDIA Quantum InfiniBand or NVIDIA Spectrum™ networking in a centralized rack enables a non-blocking, 256-GPU scalable unit in five racks, or an extended 768-GPU scalable unit in nine racks.
Supermicro’s SuperCluster, accelerated by the NVIDIA Blackwell Platform, empowers the next stage of AI, defined by new breakthroughs, including the evolution of scaling laws and the rise of reasoning models. Supermicro’s new air-cooled SuperCluster is composed of the new Supermicro NVIDIA HGX B200 8-GPU systems. Each system features a redesigned 10U chassis to accommodate the thermal demands of its leading-edge AI compute performance and is designed to tackle heavy AI workloads of all types, from training to fine-tuning to inference. NVIDIA Quantum InfiniBand or NVIDIA Spectrum™ networking in a centralized rack enables a non-blocking, 256-GPU scalable unit in nine racks.
Liquid-cooled Exascale Compute in a Rack with 72 NVIDIA Blackwell GPUs
Supermicro Servers Show Increased Performance Per Watt with High-End 4th Gen AMD EPYC™ Processors
The transformative nature of AI is creating an opportunity for emerging organizations and nations to establish a sovereign AI platform. These platforms enable entities to safeguard regional interests and develop a competitive edge in the global market. With their existing ecosystem and infrastructure, telecom companies are ideally positioned to seize this opportunity.
Download this whitepaper to learn more about the development of sovereign AI and how telcos can leverage this development.
Supermicro Hyper Servers, a flagship product line, demonstrate leading performance with Intel® Xeon® 6 processors with P-cores in a range of standard performance tests.
A Comprehensive Guide to Supermicro Systems with NVIDIA GH200 Grace Hopper™, NVIDIA H200 Tensor Core GPU and NVIDIA H100 Tensor Core GPU
Reliable and Scalable Solution Enables KHNP to Simulate Nuclear Power Plants
HEROZ Selects Supermicro SuperBlade to Provide Consistent Support for AI, Which Plays a Central Role in DX in Various Industries, from Conceptualization to Implementation and Operation
SYS-E403-14B Cyber Flyaway Kits are based on mission-critical, flexible building blocks. Supermicro’s SYS-E403-14B Mobile Edge AI/ML/HPC Server is the industry’s highest-performing edge AI server. It delivers compelling price-performance and, at 35 lbs, fits easily into any airplane overhead bin as carry-on luggage.
With 100,000 NVIDIA H100 GPUs, this multi-billion-dollar AI cluster is notable not just for its size but also for the speed at which it was built.