NVIDIA A100 Enterprise 40GB


The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration—at every scale—to power the world’s highest performing elastic data centers for AI, data analytics, and high-performance computing (HPC) applications.


CUDA Cores: 6,912
Streaming Multiprocessors: 108
Tensor Cores (3rd Gen): 432
GPU Memory: 40 GB HBM2, ECC on by default
Memory Interface: 5,120-bit
Memory Bandwidth: 1,555 GB/s
NVLink: 2-way, 2-slot bridge, 600 GB/s bidirectional
MIG (Multi-Instance GPU) Support: Yes, up to 7 GPU instances
FP64: 9.7 TFLOPS
FP64 Tensor Core: 19.5 TFLOPS
FP32: 19.5 TFLOPS
TF32 Tensor Core: 156 TFLOPS | 312 TFLOPS*
BFLOAT16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
FP16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
INT8 Tensor Core: 624 TOPS | 1,248 TOPS*
Thermal Solution: Passive
vGPU Support: NVIDIA Virtual Compute Server (vCS)
System Interface: PCIe 4.0 x16

* With structured sparsity
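The headline throughput and bandwidth figures follow directly from the core counts and bus width listed above. As a rough sanity check (assuming the A100's published ~1.41 GHz boost clock and ~2.43 Gbps-per-pin HBM2 data rate, neither of which appears in this table):

```python
# Back-of-the-envelope check of the spec-sheet numbers above.
# Assumed values (not listed in the table): ~1.41 GHz boost clock,
# ~2.43 Gbps effective data rate per HBM2 pin.

CUDA_CORES = 6912
BOOST_CLOCK_GHZ = 1.41        # assumed boost clock
MEM_BUS_BITS = 5120
HBM2_GBPS_PER_PIN = 2.43      # assumed per-pin data rate

# Each CUDA core can retire one FP32 FMA (2 FLOPs) per clock.
fp32_tflops = 2 * CUDA_CORES * BOOST_CLOCK_GHZ / 1000
print(f"Peak FP32: {fp32_tflops:.1f} TFLOPS")         # ~19.5, matching the table

# Bandwidth = bus width in bytes x per-pin data rate.
bandwidth_gbs = (MEM_BUS_BITS / 8) * HBM2_GBPS_PER_PIN
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # ~1555, matching the table
```

The Tensor Core figures scale similarly: each "| …*" column in the table is exactly double the dense figure, reflecting the 2x speedup from 2:4 structured sparsity.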