A100 Data Sheet
A100 Product Brief

Experience unprecedented hardware acceleration with the A100

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC. Powered by the NVIDIA Ampere architecture, A100 is the engine of the NVIDIA data center platform.

A100 provides up to 20X higher performance over the prior generation and can be partitioned to dynamically adjust to shifting demands. Available in 40GB and 80GB memory versions, A100 80GB debuts the world’s fastest memory bandwidth at over 2 terabytes per second (TB/s) to run the largest models and datasets.

More on NVIDIA’s GPU Acceleration

A100 provides strong scaling for GPU compute and DL applications running in single- and multi-GPU workstations, servers, clusters, cloud data centers, systems at the edge, and supercomputers. The A100 GPU enables building elastic, versatile, and high-throughput data centers.

The A100 GPU introduces Multi-Instance GPU (MIG) virtualization, which allows a single GPU to be partitioned into as many as seven independent instances. When configured for MIG operation, the A100 lets cloud service providers (CSPs) improve the utilization rates of their GPU servers, delivering up to 7x more GPU instances at no additional cost. Robust fault isolation allows them to partition a single A100 GPU safely and securely.
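As an illustration, partitioning an A100 into MIG instances is done with NVIDIA’s `nvidia-smi` tool. The sketch below assumes GPU index 0 and the `1g.5gb` profile (profile ID 19 on an A100 40GB); profile IDs and sizes vary by GPU model and driver version, and the commands require an MIG-capable GPU and administrative privileges.

```shell
# Enable MIG mode on GPU 0 (may require a GPU reset or reboot to take effect)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this GPU supports
nvidia-smi mig -lgip

# Create seven 1g.5gb GPU instances (profile ID 19 on A100 40GB)
# and the corresponding compute instances (-C)
sudo nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C

# Verify the resulting GPU instances
nvidia-smi mig -lgi
```

Each resulting instance appears to workloads as a separate GPU with its own dedicated memory, cache, and compute slices, which is what enables the fault isolation described above.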

More on the A100 for HPC

Questions? Ask us anything; we’d love to talk!

Chat with an Expert