Embrace AI with Supermicro Deep Learning technology
Deep Learning, a subset of Artificial Intelligence (AI) and Machine Learning (ML), is a state-of-the-art approach in computer science that uses multi-layered artificial neural networks to accomplish tasks too complex to program explicitly. For example, Google Maps processes millions of data points every day to determine the best route to travel and to predict the arrival time at a desired destination. Deep Learning comprises two parts: training and inference. Training involves processing as many data points as possible so that the neural network 'learns' features on its own, adjusting its weights to accomplish tasks such as image recognition and speech recognition. Inference refers to taking a trained model and using it to make useful predictions and decisions. Both training and inference require enormous amounts of computing power to achieve the desired accuracy and precision.
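To make the two phases concrete, the sketch below trains a tiny classifier and then runs it in inference mode. It uses PyTorch, one of the frameworks in the stack described later; the model, shapes, and synthetic data are illustrative assumptions, not part of the Supermicro solution.

```python
import torch
import torch.nn as nn

# --- Training: show the network labeled examples and let backpropagation
# adjust its weights until it 'learns' the mapping on its own.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 4)              # synthetic data points (hypothetical)
y = torch.randint(0, 2, (64,))      # synthetic labels (hypothetical)

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)     # how wrong is the network?
    loss.backward()                 # compute gradients
    optimizer.step()                # nudge the weights

# --- Inference: freeze the trained model and use it to make predictions.
model.eval()
with torch.no_grad():
    pred = model(torch.randn(1, 4)).argmax(dim=1)
print(f"predicted class: {pred.item()}")
```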
AI & Deep Learning Platform
Our solution offers custom Deep Learning framework installation, so that end users can start deploying Deep Learning projects immediately, without any GPU programming. The solution provides customized installation of deep learning frameworks including TensorFlow, Caffe2, MXNet, Chainer, and Microsoft Cognitive Toolkit, among others.
The Supermicro AI & Deep Learning solution provides a complete AI/Deep Learning software stack. Below is the software stack offered with the end-to-end, fully integrated solution:
| AI & Deep Learning Software Stack | | |
|---|---|---|
| Deep Learning Environment | Frameworks | Caffe, Caffe2, Caffe-MPI, Chainer, Microsoft CNTK, Keras, MXNet, TensorFlow, Theano, PyTorch |
| | Libraries | cuDNN, NCCL, cuBLAS |
| | User Access | NVIDIA DIGITS |
| | Operating Systems | Ubuntu, Docker, NVIDIA Docker |
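As a quick sanity check that the installed stack is wired together, a short script like the following (a sketch assuming the PyTorch build from the table above) reports the CUDA, cuDNN, and NCCL components the frameworks will actually use:

```python
import torch
import torch.distributed as dist

# Report the GPU stack components visible to the framework.
print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
print(f"cuDNN version: {torch.backends.cudnn.version()}")
print(f"NCCL available: {dist.is_nccl_available()}")
for i in range(torch.cuda.device_count()):
    p = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {p.name}, {p.total_memory / 1024**3:.0f} GiB")
```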
Supermicro AI & Deep Learning Solution Advantages
- Powerhouse for Computation
- The Supermicro AI & Deep Learning cluster is powered by Supermicro SuperServer® systems: high-density, compact powerhouses for computation. The cluster features the latest GPUs from Supermicro partner NVIDIA, with each compute node utilizing NVIDIA® Tesla® V100 GPUs.
- High Density Parallel Compute
- Up to 32 GPUs with up to 1TB of total GPU memory deliver maximum parallel compute performance, reducing training time for Deep Learning workloads (see the sketch following this list).
- Increased Bandwidth with NVLink
- Utilizes NVLink™, which enables faster GPU-to-GPU communication, further enhancing system performance under heavy Deep Learning workloads.
- Faster Processing with Tensor Core
- NVIDIA Tesla V100 GPUs feature the Tensor Core architecture. Tensor Cores accelerate Deep Learning operations and can deliver up to 125 Tensor TFLOPS for training and inference applications; the mixed-precision sketch following this list shows how frameworks engage them.
- Scalable Design
- Scale-out architecture with a 100Gb/s InfiniBand EDR fabric that can grow to accommodate future demand.
- Rapid Flash Xtreme (RFX) – High performance All-flash NVMe storage
- RFX is a top-of-the-line complete storage system, developed and thoroughly tested for AI & Deep Learning applications, that combines the Supermicro BigTwin™ with the WekaIO parallel file system.
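As referenced in the parallel-compute and Tensor Core items above, the sketch below shows one common way frameworks put this hardware to work on a single node: replicating the model across every visible GPU with PyTorch's DataParallel and training in FP16 mixed precision, which is what engages the V100 Tensor Cores. The model, shapes, and data are hypothetical; this is an illustrative sketch, not the packaged software.

```python
import torch
import torch.nn as nn
from torch.cuda.amp import autocast, GradScaler

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))

    @autocast()  # run matmuls in FP16 inside forward so Tensor Cores engage
    def forward(self, x):
        return self.layers(x)

model = Net().cuda()
if torch.cuda.device_count() > 1:
    # Replicate the model across all visible GPUs; per-GPU results are
    # gathered over NVLink where available.
    model = nn.DataParallel(model)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
scaler = GradScaler()  # loss scaling keeps small FP16 gradients from underflowing

x = torch.randn(256, 1024, device="cuda")        # hypothetical batch
y = torch.randint(0, 10, (256,), device="cuda")  # hypothetical labels

optimizer.zero_grad()
with autocast():
    loss = loss_fn(model(x), y)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
print(f"training step loss: {loss.item():.4f}")
```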
AI & Deep Learning Reference Architecture Configuration
Supermicro currently offers the following complete solutions, thoroughly tested and ready to go. These clusters can be scaled up or down to meet the needs of your Deep Learning projects; a sketch of the scale-out training pattern they support appears after the configurations below.
- HPC, Artificial Intelligence, Big Data Analytics, Research Lab, Astrophysics, Business Intelligence
- Dual Socket P (LGA 3647) support: 2nd Gen. Intel® Xeon® Scalable processors; dual UPI up to 10.4GT/s
- 12 DIMMs; up to 3TB 3DS ECC DDR4-2933 MHz RDIMM/LRDIMM
- Supports Intel® Optane™ DCPMM*
- 2 Hot-swap 2.5" drive bays, 2 Internal 2.5" drive bays
- 4 PCI-E 3.0 x16 slots
- 2x 10GBase-T ports via Intel X540, 1 Dedicated IPMI port
- 1 VGA, 2 COM, 2 USB 3.0 (rear)
- 7x 4cm heavy-duty counter-rotating fans with air shroud
- 2000W Redundant Titanium Level (96%) Power Supplies
*Contact your Supermicro sales rep for more info.
- Artificial Intelligence, Big Data Analytics, High-performance Computing, Research Lab/National Lab, Astrophysics, Business Intelligence
- Dual Socket P (LGA 3647) support: 2nd Gen. Intel® Xeon® Scalable processors; 3 UPI up to 10.4GT/s
- 24 DIMMs; up to 6TB 3DS ECC DDR4-2933 MHz RDIMM/LRDIMM
- Supports Intel® Optane™ DCPMM*
- 16 Hot-swap 2.5" drive bays (up to 8 NVMe drives supported)
- 4 PCI-E 3.0 x16 (LP, GPU tray for GPUDirect RDMA), 2 PCI-E 3.0 x16 (LP, CPU tray)
- 2x 10GBase-T ports via Intel X540, 1 Dedicated IPMI port
- 1 VGA, 1 COM, 2 USB 3.0 (front)
- 8x 92mm cooling fans, 4x 80mm cooling fans
- 2200W (2+2) Redundant Titanium Level (96%) Power Supplies
*Contact your Supermicro sales rep for more info.
- AI/Deep Learning, Video Transcoding
- Dual Socket P (LGA 3647) support: 2nd Gen. Intel® Xeon® Scalable processors; 3 UPI up to 10.4GT/s
- 24 DIMMs; up to 6TB 3DS ECC DDR4-2933 MHz RDIMM/LRDIMM
- Supports Intel® Optane™ DCPMM*
- 24 Hot-swap 3.5" drive bays, 2 optional 2.5" U.2 NVMe drives
- 20 PCI-E 3.0 x16 slots, 1 PCI-E 3.0 x8 (FHFL, in x16 slot)
- 2x 10GBase-T ports via Intel C622, 1 Dedicated IPMI port
- 1 VGA, 1 COM, 4 USB 3.0 (rear)
- 8x 92mm Hot-swappable Cooling Fans
- 2000W (2+2) Redundant Titanium Level (96%) Power Supplies
*Contact your Supermicro sales rep for more info.
- AI/Deep Learning, High-performance Computing
- Dual Socket P (LGA 3647) support: 2nd Gen. Intel® Xeon® Scalable processors; 3 UPI up to 10.4GT/s
- 24 DIMMs; up to 6TB 3DS ECC DDR4-2933 MHz RDIMM/LRDIMM
- Supports Intel® Optane™ DCPMM*
- 16 Hot-swap 2.5" NVMe drive bays, 6 Hot-swap 2.5" SATA3 drive bays
- 16 PCI-E 3.0 x16 slots for RDMA via IB EDR, 2 PCI-E 3.0 x16 on board
- 2x 10GBase-T ports via Intel X540, 1 Dedicated IPMI port
- 1 VGA, 1 COM, 2 USB 3.0 (front)
- 6x 80mm Hot-swap PWM fans, 8x 92mm Hot-swap fans
- 6x 3000W Redundant Titanium Level (96%) Power Supplies
*Contact your Supermicro sales rep for more info.
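Whichever configuration above is deployed, the scale-out InfiniBand fabric is exercised through distributed training. The following is a minimal, hedged sketch using PyTorch DistributedDataParallel with the NCCL backend (NCCL rides NVLink within a node and can use the InfiniBand EDR fabric between nodes); the model, shapes, and launch parameters are illustrative assumptions, not a prescribed Supermicro workflow.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per GPU; the launcher sets LOCAL_RANK, RANK, WORLD_SIZE.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Hypothetical model and batch; real workloads load sharded datasets.
    model = DDP(nn.Linear(1024, 10).cuda(), device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(128, 1024, device="cuda")
    y = torch.randint(0, 10, (128,), device="cuda")

    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()              # gradients are all-reduced across all GPUs/nodes
    optimizer.step()

    if dist.get_rank() == 0:
        print(f"step complete, loss {loss.item():.4f}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Each node would launch this with something like `torchrun --nnodes=<nodes> --nproc_per_node=<GPUs per node> --rdzv_endpoint=<head node>:29500 train.py`; the exact launcher and rendezvous endpoint are deployment-specific.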