Server Design Brings Open, High-Density Compute to the Edge
Supermicro, Red Hat, Senao, Lanner and Intel join forces to create a family of edge computing solutions that are AI-ready and scalable for workload consolidation

AI Performance and Efficiency at the Intelligent Edge, Powered by Intel
AI Factories from Supermicro and NVIDIA are complete, turnkey solutions designed to simplify enterprise AI deployment at scale, delivering faster time‑to‑online and time‑to‑revenue. These end‑to‑end AI infrastructure solutions combine high‑performance GPU compute, AI software, high‑speed networking, and scalable storage to accelerate data‑center‑ready AI workloads.

The new Supermicro Super AI Station for AI inferencing, model training, and fine-tuning, featuring the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip

Powerful, flexible multi-workload acceleration from AI factory, to data center, to edge

AI factories from Supermicro and NVIDIA are complete, turnkey solutions that simplify the deployment of AI at any scale, delivered first to market and backed by rack-level integration.
Join Kitana and Rudy from the Technology Enablement team at Supermicro as they introduce the ideal solution for AI development, the Super AI Station. Supermicro’s ARS-511GD-NB-LCC, featuring the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, is a deskside, liquid‑cooled system that delivers data center–class AI performance right under your desk for training, fine‑tuning, and high‑throughput inference on large models.
Supermicro BigTwin® provides maximum compute and storage density with power efficiency, making it a compelling choice for modern workloads that demand scalability, flexibility, and performance in energy-constrained environments.
Supermicro SuperBlade® delivers unmatched compute density and exceptional performance in a compact form factor, empowering data centers to achieve more with less.

Supermicro's Rear Door Heat Exchanger (RDHx) is a plug-and-cool, air-to-liquid solution for data centers, providing up to 80kW cooling capacity with universal rack compatibility, redundant fans (4-5 units), smart anti-condensation, and protocols like Redfish, SNMP, and web UI for efficient heat removal in high-density AI and HPC setups.

Supermicro's Rear Door Heat Exchanger (RDHx) is a plug-and-cool, air-to-liquid solution for data centers, providing up to 50kW cooling capacity with universal rack compatibility, redundant fans (6-10 units), smart anti-condensation, and protocols like Redfish, SNMP, and web UI for efficient heat removal in high-density AI and HPC setups.
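The RDHx blurbs above mention standards-based management via Redfish. As a minimal sketch of what that looks like in practice, the snippet below parses fan health out of a Redfish "Thermal" resource. The payload is illustrative sample data, not actual Supermicro RDHx output; only the field names (Chassis → Thermal → Fans) follow the published DMTF Redfish schema.

```python
# Sketch: extracting fan status from a Redfish Thermal resource, as an
# RDHx management controller might expose it over HTTP. The values below
# are invented sample data; field names follow the DMTF Redfish schema.

sample_thermal = {
    "@odata.id": "/redfish/v1/Chassis/1/Thermal",
    "Fans": [
        {"Name": "Fan1", "Reading": 4200, "ReadingUnits": "RPM",
         "Status": {"Health": "OK", "State": "Enabled"}},
        {"Name": "Fan2", "Reading": 0, "ReadingUnits": "RPM",
         "Status": {"Health": "Critical", "State": "Enabled"}},
    ],
}

def failed_fans(thermal: dict) -> list[str]:
    """Return the names of fans whose reported health is not 'OK'."""
    return [
        fan["Name"]
        for fan in thermal.get("Fans", [])
        if fan.get("Status", {}).get("Health") != "OK"
    ]

print(failed_fans(sample_thermal))  # -> ['Fan2']
```

In a live deployment the same dictionary would come from an authenticated GET against the controller's `/redfish/v1/Chassis/<id>/Thermal` endpoint; redundant-fan alerting (and SNMP traps) would hang off the same health fields.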

SteelDome on Supermicro BigTwin® provides a validated, high-density path to modern infrastructure, unifying storage, virtualization, and orchestration in a single platform. Designed to scale without disruptive migrations and built to support performance-intensive workloads with strong resilience, the combination of cluster-first software and cluster-friendly BigTwin hardware enables customers to deploy quickly, operate simply, and scale confidently.

Purpose-built AI Factories, powered by Supermicro systems and NVIDIA accelerated computing, allow operators to deploy secure, scalable, GPU-dense AI environments that support GenAI, RAG, LLM training, and real-time inference — while enabling new revenue opportunities such as Sovereign AI-as-a-Service.

Sovereign AI and Hybrid GPU Clouds Benefit from an Integrated Solution

xiNAS is Xinnor’s high-performance NFS server solution designed for AI, HPC, and other throughput-hungry environments. This document presents a validation of xiNAS on a Supermicro NVMe server, demonstrating performance and resilience across multi-client and multi-server scenarios, including degraded and rebuild states.