
The Blueprint for Modern AI Data Centers

As the demand for advanced AI applications continues to grow, data centers must evolve to support massive compute loads, mission-critical workloads, and liquid cooling requirements, all without sacrificing speed, uptime, or flexibility. Supermicro’s Data Center Building Block Solution (DCBBS) is designed to provide everything you need to outfit a modern AI data center. DCBBS is built on a modular philosophy: complex systems are assembled from validated components and sub-systems. From individual GPUs to full racks and facility-side infrastructure, Supermicro enables end-to-end deployment with ultimate flexibility.


Rapid Deployment with First-to-Market Technologies

The Data Center Building Block Solution (DCBBS) provides first access to cutting-edge technologies, including the latest GPUs, CPUs, interconnects, storage, networking, and liquid cooling at any scale, to maximize performance, efficiency, and return on infrastructure investment.

One-Stop Shop with Onsite Services

As your single trusted partner for AI infrastructure, Supermicro manages the complete lifecycle from design and assembly to onsite deployment and ongoing support, enabling rapid data center implementation with a monthly capacity of 5,000+ racks—including 2,000+ liquid-cooled racks—available from global production facilities.

Customized to Your Workloads

The Building Blocks DNA throughout Supermicro’s solutions allows an unbeatable level of customization at the server, rack, cluster, cooling, and power levels, tailored to your workload and application requirements.

Validated for Fast Time to Online

Defined as part of the data center building blocks, racks are fully integrated and validated at the cluster level with L11 and L12 testing, accelerating time to online and ensuring plug-and-play deployment.

System-Level Building Blocks

Supermicro systems have long been designed with a Building Block architecture. This approach is part of why Supermicro is able to offer the industry's widest portfolio of servers, which allows systems to be optimized more closely to project requirements. Supermicro DCBBS starts here at the system level. It is critically important to carefully fine-tune the system bill of materials (BOM), because it establishes the balance of computing resources for the entire data center. Supermicro offers an unbeatable level of customization, with the freedom to choose individual sub-components.
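
To give a flavor of what fine-tuning a system BOM involves, here is a minimal Python sketch that models a configuration as data and flags one common balance rule (a dedicated high-speed network rail per GPU). The component fields, example counts, and checks are illustrative assumptions, not a Supermicro configuration tool.

    # Illustrative model of a system-level building block and a simple
    # resource-balance check. Field names, counts, and the one-rail-per-GPU
    # rule are assumptions for illustration only.
    from dataclasses import dataclass

    @dataclass
    class SystemBOM:
        cpus: int
        gpus: int
        dimms: int
        nvme_drives: int
        network_rails: int   # high-speed NICs dedicated to GPU traffic

    def balance_warnings(bom: SystemBOM) -> list[str]:
        """Flag obvious imbalances between compute, memory, and I/O."""
        warnings = []
        if bom.network_rails < bom.gpus:
            warnings.append("fewer NIC rails than GPUs: multi-node training "
                            "may bottleneck on node-to-node bandwidth")
        if bom.dimms % bom.cpus != 0:
            warnings.append("DIMMs not spread evenly across CPU sockets")
        return warnings

    # Example: an 8-GPU training node with one high-speed rail per GPU.
    node = SystemBOM(cpus=2, gpus=8, dimms=32, nvme_drives=8, network_rails=8)
    print(balance_warnings(node))   # -> []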

Family of Supermicro servers supporting NVIDIA Blackwell accelerators

Modular Design with Endless Customization Possibilities

Fine-tune the balance of data center resources with customization down to the CPUs, GPUs, DIMMs, drives, NICs, and more, across various chassis form factors. Multiple system design choices optimize I/O, thermal, power, and cabling to your data center layout.

Broad Range of Systems with Resource Optimization

The Building Block Solution enables a broad range of optimized system designs that quickly adopt cutting-edge technologies. Precisely tailored systems fit the hardware to the application, including specialized systems for scalable AI compute, high-performance storage, and edge computing.

Advanced Cooling Systems

Reduce data center power costs and increase AI performance per watt with liquid-cooled CPUs, GPUs, DIMMs, PCIe switches, VRMs, power supplies, and more. Advanced chassis mechanical designs optimize airflow, pushing the limits of compute density and power efficiency.

Reduced Supply Chain Challenges, Fast Production

The common use of modular building block subsystems speeds up time-to-market and removes supply chain bottlenecks. Supermicro’s industry-leading manufacturing capacity and worldwide logistics ensure timely assembly and delivery at scale.

4U NVIDIA HGX™ B200 8-GPU system, with callouts for power supplies and high-speed NICs, GPU cold plates, the NVIDIA HGX B200 8-GPU baseboard, CPU, DIMM, and PCIe switch cold plates, and hot-swappable high-performance drives

Rack & Cluster Building Blocks

Once systems are defined, they’re integrated into rack-level building blocks, the organizational backbone of your cluster.

  • Optimized cable layouts for reduced airflow obstruction and improved performance
  • Scalable Units enable 256-node clusters and beyond
  • Non-blocking network topology for fast node-to-node communication

The complex process of building a large AI cluster, such as a 256-node cluster, can be made simpler by dividing it into smaller parts. These "scalable units" consist of groups of systems, interconnected with a rail-optimized network topology, that can be multiplied to reach the desired cluster size.
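
As a rough illustration of the scalable-unit arithmetic, the following Python sketch computes how many scalable units, racks, and GPUs a target node count implies. The 64-node unit, 8 GPUs per node, and 8 nodes per rack match figures quoted elsewhere on this page; the code itself is only an illustrative planning aid.

    # Illustrative sizing sketch for scalable-unit-based cluster planning.
    # The 64-node scalable unit, 8 GPUs per node, and 8 nodes per rack
    # reflect configurations described on this page.
    import math

    NODES_PER_SCALABLE_UNIT = 64
    GPUS_PER_NODE = 8
    NODES_PER_RACK = 8

    def plan_cluster(target_nodes: int) -> dict:
        """Building-block counts needed to reach target_nodes,
        rounded up to whole scalable units."""
        scalable_units = math.ceil(target_nodes / NODES_PER_SCALABLE_UNIT)
        nodes = scalable_units * NODES_PER_SCALABLE_UNIT
        return {
            "scalable_units": scalable_units,
            "nodes": nodes,
            "racks": nodes // NODES_PER_RACK,
            "gpus": nodes * GPUS_PER_NODE,
        }

    # Example: the 256-node cluster mentioned above.
    print(plan_cluster(256))
    # -> {'scalable_units': 4, 'nodes': 256, 'racks': 32, 'gpus': 2048}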

3 types of liquid-cooled server racks: GB200 NVL72; 4U 8-GPU; 6U SuperBlade
L11 and L12 Validated Cluster Level Scalable Unit (64 Nodes with 512 GPUs)

Compute

  • 8x SYS-422GA-NBRT-LCC or AS -4126GS-NBR-LCC per rack
  • 8x NVIDIA HGX B200 8-GPU per rack
  • 64x NVIDIA B200 Tensor Core GPUs
  • 8x 1440GB HBM3e per rack
  • Flexible storage options with local storage or a dedicated storage fabric, with full NVIDIA GPUDirect RDMA support

Liquid Cooling

  • Supermicro Direct-to-Chip Liquid Cooling (DLC) cold plates for CPU, GPU, DIMM, VRM, PCIe Switch, PSU, and more
  • Supermicro 250kW capacity Coolant Distribution Unit (CDU) with redundant PSUs and hot-swap pumps (see the flow-rate sketch below)
  • Supermicro Coolant Distribution Manifolds (CDMs)
  • Optional 240kW or 180kW capacity Liquid-to-air solution
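
To put the CDU capacity in context, the sketch below works through the basic heat-transfer relationship (heat load = coolant mass flow x specific heat x temperature rise) that links a CDU's rating to the coolant flow it must circulate. The 250kW figure comes from the list above; the 10K loop temperature rise and the water properties are illustrative assumptions, not Supermicro specifications.

    # Relationship between heat load, coolant flow, and temperature rise:
    #     Q = m_dot * c_p * delta_T
    # with Q in kW, m_dot in kg/s, c_p in kJ/(kg*K), and delta_T in K.
    # The 250 kW CDU capacity is quoted above; the 10 K loop temperature
    # rise is an assumed, illustrative value.

    WATER_CP = 4.18       # kJ/(kg*K), specific heat of water
    WATER_DENSITY = 1.0   # kg/L, close enough for warm water

    def required_flow_lpm(heat_load_kw: float, delta_t_k: float) -> float:
        """Coolant flow (liters per minute) needed to remove heat_load_kw
        with a coolant temperature rise of delta_t_k."""
        mass_flow_kg_s = heat_load_kw / (WATER_CP * delta_t_k)
        return mass_flow_kg_s / WATER_DENSITY * 60.0

    # A 250 kW heat load with an assumed 10 K loop temperature rise:
    print(round(required_flow_lpm(250, 10)))   # ~359 L/min of coolant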

Networking

  • In-band management switch
  • Out-of-band IPMI management switch
  • Non-blocking network
  • Spine and Leaf switches in the dedicated networking rack or in the individual compute racks (see the fabric-sizing sketch below)
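
To make the non-blocking property concrete, here is an illustrative sizing sketch for a two-tier leaf-spine fabric, in which each leaf switch dedicates half of its ports to nodes and half to spine uplinks so uplink bandwidth matches downlink bandwidth. The 64-port switch radix and the one-NIC-rail-per-GPU example are assumptions, not values stated on this page.

    # Minimal non-blocking two-tier leaf-spine sizing sketch.
    # A fabric is non-blocking when each leaf has as much uplink capacity
    # (toward the spines) as downlink capacity (toward the nodes), i.e.
    # half of its ports face down and half face up.
    # The 64-port radix and one NIC per GPU are illustrative assumptions.
    import math

    def size_leaf_spine(endpoints: int, radix: int = 64) -> dict:
        """Leaf and spine switch counts for a non-blocking two-tier fabric."""
        down_ports_per_leaf = radix // 2              # ports facing the nodes
        leaves = math.ceil(endpoints / down_ports_per_leaf)
        # Each leaf has radix//2 uplinks, spread evenly across the spines.
        spines = math.ceil(leaves * (radix // 2) / radix)
        return {"leaf_switches": leaves,
                "spine_switches": spines,
                "max_endpoints": radix * radix // 2}

    # Example: one 64-node scalable unit with 8 NIC rails per node.
    print(size_leaf_spine(endpoints=64 * 8))
    # -> {'leaf_switches': 16, 'spine_switches': 8, 'max_endpoints': 2048}
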
Data Center & Liquid Cooling Infrastructure

When everything comes together properly, the data center becomes a single, functional unit of compute. In addition to the computing equipment, Supermicro provides end-to-end project management, including designing data center layouts and network topologies. After the initial consultation, Supermicro delivers a project proposal tailored to a given data center power budget, performance target, or other requirements through DCBBS.

Supermicro DLC-2

Thermals and cooling present a challenge equal to, or even greater than, power as a data center resource. Supermicro leads the industry in direct-to-chip liquid cooling (DLC) technology. Liquid cooling infrastructure is planned and deployed at data center scale, including the piping and the facility-side liquid cooling tower for heat dissipation. DCBBS provides a total solution for DLC infrastructure, consisting of DLC systems, DLC cold plates, in-rack or in-row coolant distribution units, coolant distribution manifolds, cooling towers, and more.

Power Savings

Up to 40% savings across the entire data center (vs. air cooling) by using Supermicro DLC-2

Water Savings

Up to 40% savings with 45°C warm-water operation, eliminating chilled water and compressors

System Heat Capture

Up to 98% heat capture with DLC-2 liquid cooling covering the CPU, GPU, PCIe switch, DIMM, VRM, and PSU

Quiet Data Center

~50dB Significantly reduced noise with fewer fans and lower fan speeds, as quiet as a library

Space Savings

Up to 60% savings with more than 2.5x the compute density of air-cooled systems
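
The heat-capture figure above has a simple operational consequence: if roughly 98% of system heat leaves through the liquid loop, only a small remainder must be handled by room air. The sketch below works through that split; the 200kW example rack load is an assumed value, not a Supermicro specification.

    # Illustrative split of rack heat between the liquid loop and room air,
    # based on the ~98% direct-to-chip heat-capture figure quoted above.
    # The 200 kW rack IT load is an assumed example value, not a spec.

    HEAT_CAPTURE_RATIO = 0.98   # fraction of heat removed by the DLC loop

    def heat_split(rack_it_load_kw: float) -> dict:
        """Heat the liquid loop vs. room air must remove for one rack."""
        to_liquid = rack_it_load_kw * HEAT_CAPTURE_RATIO
        to_air = rack_it_load_kw - to_liquid
        return {"liquid_kw": round(to_liquid, 1), "air_kw": round(to_air, 1)}

    print(heat_split(200.0))   # -> {'liquid_kw': 196.0, 'air_kw': 4.0}
    # Only a few kW per rack is left for room air handling, which is why
    # fan counts, fan speeds, and noise can drop so sharply.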

Features

  • Solution Integration
  • Testing and Validation
  • Liquid-Cooled Systems
  • Networking and Cabling
  • CDU, CDM, and Cold Plate
  • Cooling Tower

Learn More About DLC-2

Pre-validated, Plug-and-Play DCBBS Reference Scalable Unit

Supermicro offers ready-to-deploy DCBBS packages, including:

256-Node Scalable AI Factory

  • Based on proven deployments from the world’s largest AI clusters
  • Fully tested & scalable
  • Configurable for application types, power budgets, and GPU counts

DCBBS reference scalable units

x256 Compute Nodes

Supermicro liquid-cooled 4U B200 8-GPU system (compute node)

Supermicro NVIDIA HGX B200 8-GPU systems for a total of 2,048 GPUs

x32 Racks

Supermicro B200 48U Rack

Eight 4U liquid-cooled systems per rack with a 250kW CDU and vertical CDM

x4 Scalable Units

Supermicro B200 48U 5-rack cluster

A scalable unit of 512 GPUs interconnected with up to 800G NVIDIA Quantum-2 InfiniBand or Spectrum™-X Ethernet

HPS Storage Fabric

Supermicro 1U petascale E3.S All-flash storage server

High-performance storage with Supermicro Petascale all-flash systems and Supermicro top/front-loading storage systems

DLC Total Solution

Supermicro liquid cooling tower

Total liquid cooling solutions with liquid-cooled systems, high-density liquid-cooled racks with in-rack or in-row CDU, and cooling tower

Global Services and Support

DCBBS includes the services required to achieve time-to-market and time-to-online quickly, without draining the customer’s own IT resources. Supermicro offers a full portfolio of service-level building blocks, such as data center design, solution validation, and professional onsite deployment. It also includes continued onsite support to ensure long-term success, along with a 4-hour onsite response time option for mission-critical uptime.

Features

  • Global Service Desk
  • Digital Media Retention Service
  • End-to-End Logistics
  • 4-Hour Onsite Response
  • Onsite Integration Service
  • Parts Replacement Service

Software

In addition to services, Supermicro has broad expertise in data center application integration, including AI training, AI inferencing, cluster management, and workload orchestration. Supermicro provides full services for software provisioning and validation based on the customer’s software stack. Many of our system building blocks for AI infrastructure are NVIDIA-Certified and take advantage of NVIDIA AI Enterprise software.
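
As a small, hypothetical example of the kind of post-provisioning validation such services automate, the sketch below checks that every node in a rack reports the expected GPU count. It assumes SSH access and an installed NVIDIA driver on each node; the hostnames and the check itself are illustrative, not Supermicro's actual validation suite.

    # Hypothetical post-provisioning check: confirm each node exposes the
    # expected number of GPUs by running `nvidia-smi -L` over SSH.
    # Assumes passwordless SSH and the NVIDIA driver on every node.
    import subprocess

    EXPECTED_GPUS_PER_NODE = 8

    def gpu_count(hostname: str) -> int:
        """Count GPUs reported by `nvidia-smi -L` on a remote node."""
        out = subprocess.run(
            ["ssh", hostname, "nvidia-smi", "-L"],
            capture_output=True, text=True, check=True,
        ).stdout
        return sum(1 for line in out.splitlines() if line.startswith("GPU "))

    def validate(nodes: list[str]) -> list[str]:
        """Return the nodes that do not report the expected GPU count."""
        return [n for n in nodes if gpu_count(n) != EXPECTED_GPUS_PER_NODE]

    # Example (hostnames are hypothetical placeholders):
    # print(validate([f"node{i:03d}" for i in range(1, 9)]))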

SuperCloud Composer®

Manage your entire infrastructure, including third-party devices, at a glance with state-of-the-art dashboards. Respond and adapt quickly to dynamic business needs with storage, compute, and networking flexibility that accommodates ever-changing workload requirements.

Ready to Build the Future of AI?

Contact Supermicro today to design your next-generation AI data center.

Contact Us