
NVIDIA A100 80GB PCIe Price, Specs & Performance

  • Feb 9, 2024
  • 4 min read


The NVIDIA A100 80GB PCIe is a data center GPU built on the Ampere architecture, designed for AI training, AI inference, and high-performance computing (HPC).


NVIDIA A100 80GB PCIe GPUs

✔️ 5-Year Warranty – No Risk: Pay Only After Testing


This guide covers full specifications, FP64 clarification, memory bandwidth, MIG support, and realistic market pricing.


[Image: NVIDIA A100 80GB PCIe GPU – Ampere architecture, 80GB HBM2e memory, 1,935 GB/s bandwidth, 9.7 / 19.5 TFLOPS FP64, 156 TFLOPS TF32, 312 TFLOPS FP16, 300W TDP, MIG support]


NVIDIA A100 80GB PCIe – Technical Specifications

  • Architecture: NVIDIA Ampere

  • GPU Memory: 80GB HBM2e

  • Memory Bandwidth: 1,935 GB/s

  • CUDA Cores: 6,912

  • Tensor Cores: 432 (3rd Generation)

  • FP64 (CUDA Cores): 9.7 TFLOPS

  • FP64 (Tensor Cores): 19.5 TFLOPS

  • TF32 (Tensor Cores): 156 TFLOPS

  • FP16 (Tensor Cores): 312 TFLOPS

  • Interface: PCIe Gen4 x16

  • TDP: 300W

  • Cooling: Passive

  • MIG: Up to 7 GPU instances

These values apply specifically to the 80GB PCIe variant, not the SXM model.



FP64 Performance Explained (9.7 vs 19.5 TFLOPS) - NVIDIA A100 80GB PCIe


The NVIDIA A100 80GB PCIe lists two FP64 figures:

  • 9.7 TFLOPS → Standard double precision via CUDA cores

  • 19.5 TFLOPS → FP64 using Tensor Cores


The higher number applies when workloads are optimized for Tensor Core acceleration. For traditional CUDA-only double-precision workloads, 9.7 TFLOPS is the relevant baseline. This distinction is important when sizing HPC clusters.
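
As a back-of-envelope check, the 9.7 TFLOPS figure follows directly from NVIDIA's published A100 specs (108 SMs, 32 FP64 cores per SM, ~1.41 GHz boost clock), and the FP64 Tensor Core path doubles the per-clock rate. A minimal Python sketch of that arithmetic:

  # Peak FP64 throughput from published A100 spec values (assumed below).
  sms = 108                  # streaming multiprocessors
  fp64_cores_per_sm = 32     # FP64 CUDA cores per SM
  boost_clock_hz = 1.41e9    # ~1.41 GHz boost clock
  flops_per_fma = 2          # a fused multiply-add counts as 2 FLOPs

  peak_fp64 = sms * fp64_cores_per_sm * boost_clock_hz * flops_per_fma
  print(f"FP64 via CUDA cores:   {peak_fp64 / 1e12:.1f} TFLOPS")      # ~9.7
  # FP64 Tensor Cores (DMMA) double the per-clock FP64 throughput.
  print(f"FP64 via Tensor Cores: {2 * peak_fp64 / 1e12:.1f} TFLOPS")  # ~19.5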



Memory and Bandwidth - NVIDIA A100 80GB PCIe


The NVIDIA A100 80GB PCIe delivers:

  • 80GB HBM2e memory

  • 1,935 GB/s memory bandwidth


High bandwidth is critical for:

  • Large transformer model training

  • Memory-bound simulations

  • Multi-user GPU partitioning


For comparison, the NVIDIA Tesla V100 32GB provides 900 GB/s bandwidth and 32GB memory. The A100 significantly increases both capacity and throughput.

The SXM version of the A100 reaches ~2,039 GB/s due to higher clocks and power limits.
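
As a rough sanity check on achievable bandwidth, timing a large device-to-device copy in PyTorch is a common approach. The sketch below assumes PyTorch with CUDA support and an A100 in the system; measured numbers typically land below the 1,935 GB/s spec-sheet peak:

  import time
  import torch  # assumes PyTorch built with CUDA and an A100 visible

  n_bytes = 4 * 1024**3  # 4 GiB test buffer
  src = torch.empty(n_bytes, dtype=torch.uint8, device="cuda")
  dst = torch.empty_like(src)

  torch.cuda.synchronize()
  t0 = time.perf_counter()
  iters = 20
  for _ in range(iters):
      dst.copy_(src)  # device-to-device copy: one read + one write per byte
  torch.cuda.synchronize()
  elapsed = time.perf_counter() - t0

  gbps = 2 * n_bytes * iters / elapsed / 1e9  # x2: read src, write dst
  print(f"Achieved copy bandwidth: {gbps:.0f} GB/s (spec peak: 1,935 GB/s)")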



Compute Performance by Precision Mode - NVIDIA A100 80GB PCIe


  • FP64 (CUDA): 9.7 TFLOPS

  • FP64 (Tensor Core): 19.5 TFLOPS

  • TF32: 156 TFLOPS

  • FP16: 312 TFLOPS


Real-world performance depends on:

  • Model architecture

  • Software stack (CUDA, cuDNN, TensorRT)

  • Precision mode

  • PCIe bandwidth

  • CPU platform


Performance claims such as “20X faster” or “249X vs CPU” originate from NVIDIA benchmark scenarios (e.g., BERT-Large inference with optimized INT8 or sparsity). Actual results vary.
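
For example, in PyTorch the precision path is largely a one-line choice. This illustrative sketch (assuming PyTorch on an A100; torch.backends.cuda.matmul.allow_tf32 is a standard PyTorch flag whose default varies by version) runs the same matrix multiply through the TF32, FP16, and FP64 paths:

  import torch  # assumes PyTorch with CUDA and an A100

  a = torch.randn(4096, 4096, device="cuda")
  b = torch.randn(4096, 4096, device="cuda")

  # TF32: FP32 matmuls route through Tensor Cores (the 156 TFLOPS path).
  torch.backends.cuda.matmul.allow_tf32 = True
  c_tf32 = a @ b

  # FP16: half-precision inputs use the 312 TFLOPS Tensor Core path.
  c_fp16 = a.half() @ b.half()

  # FP64: double precision (9.7 TFLOPS CUDA / 19.5 TFLOPS Tensor Core path).
  c_fp64 = a.double() @ b.double()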



MIG (Multi-Instance GPU) Support - NVIDIA A100 80GB PCIe


The NVIDIA A100 80GB PCIe supports Multi-Instance GPU (MIG). It can be partitioned into up to seven isolated GPU instances.


MIG is useful for:

  • Multi-tenant AI environments

  • Kubernetes deployments

  • Inference workloads

  • Resource isolation in shared clusters


Each MIG profile allocates defined compute cores and memory slices.
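
As an illustration, MIG is configured through nvidia-smi. The hedged sketch below wraps the standard commands in Python; it requires root, enabling MIG mode resets the GPU, and profile names such as 1g.10gb should be confirmed against the -lgip listing for your driver version:

  import subprocess  # sketch: shells out to nvidia-smi (run as root)

  def run(cmd):
      print(subprocess.run(cmd, capture_output=True, text=True).stdout)

  # Enable MIG mode on GPU 0 (triggers a GPU reset; stop workloads first).
  run(["nvidia-smi", "-i", "0", "-mig", "1"])

  # List the GPU-instance profiles the driver offers (e.g., 1g.10gb).
  run(["nvidia-smi", "mig", "-lgip"])

  # Create seven 1g.10gb GPU instances and their compute instances (-C).
  run(["nvidia-smi", "mig", "-cgi", ",".join(["1g.10gb"] * 7), "-C"])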



Power and Cooling Requirements - NVIDIA A100 80GB PCIe


NVIDIA A100 80GB PCIe Power

  • 300W TDP per GPU

  • Adequate PSU headroom required in multi-GPU systems
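
A quick sizing sketch (the platform draw and headroom factor below are illustrative assumptions, not measured values):

  # Rough PSU sizing for a multi-GPU node -- illustrative figures only.
  gpus = 4
  gpu_tdp_w = 300     # A100 80GB PCIe TDP
  platform_w = 800    # assumed CPUs, RAM, drives, fans; varies by server
  headroom = 1.2      # assumed ~20% margin for transients

  required_psu_w = (gpus * gpu_tdp_w + platform_w) * headroom
  print(f"Suggested PSU capacity: {required_psu_w:.0f} W")  # 2400 W here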


NVIDIA A100 80GB PCIe Cooling

  • Passive cooling design

  • Requires high-airflow rack server chassis


NVIDIA A100 80GB PCIe – PCIe Compatibility

  • PCIe Gen4 recommended

  • PCIe Gen3 supported (reduced bandwidth)


The A100 PCIe is a compute-only GPU and does not include display outputs.



NVIDIA A100 80GB PCIe Price (New & Refurbished)


The price of the NVIDIA A100 80GB PCIe depends on:

  • Market demand cycles

  • Supply from hyperscalers

  • Warranty coverage

  • Region

  • Bulk quantity


There is no fixed global street price.


Typical Market Price Range (2024–2026) - NVIDIA A100 80GB PCIe


New Units (when available): NVIDIA A100 80GB PCIe

Historically: €18,000 – €25,000+ per GPU

Availability through official distribution is limited, as NVIDIA's focus has shifted to newer GPU generations.


Refurbished / Secondary Market Units: NVIDIA A100 80GB PCIe

Typically: €9,000 – €16,000 per GPU


Price varies based on:

  • Warranty length (1–3 years)

  • Testing documentation

  • Physical condition

  • Market demand


During AI demand spikes, refurbished pricing has increased significantly.


Cost-Per-Performance Perspective: NVIDIA A100 80GB PCIe

When evaluating price, consider:

  • 80GB HBM2e capacity

  • 1.9 TB/s bandwidth

  • FP64 capability (9.7 / 19.5 TFLOPS)

  • MIG partitioning


For many AI and HPC deployments, the A100 still delivers strong price-to-performance value, especially in refurbished cluster builds.
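
One simple way to frame the trade-off is cost per peak TFLOPS. The sketch below uses this article's indicative price ranges and the 312 TFLOPS dense FP16 figure, purely for illustration:

  # Illustrative EUR-per-TFLOPS math using the price ranges quoted above.
  prices_eur = {"refurbished low": 9_000, "refurbished high": 16_000, "new high": 25_000}
  fp16_tflops = 312  # dense FP16 Tensor Core peak

  for label, price in prices_eur.items():
      print(f"{label}: ~{price / fp16_tflops:.0f} EUR per FP16 TFLOPS")
  # refurbished low: ~29; refurbished high: ~51; new high: ~80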


Refurbished NVIDIA A100 80GB PCIe – What to Check


If buying refurbished:

  • Request stress test documentation

  • Verify firmware and driver compatibility

  • Confirm warranty terms

  • Inspect for data center extraction damage


Silicon performance is identical between new and properly tested refurbished units.
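
For basic on-receipt verification, nvidia-smi can report a card's identity and live telemetry in a single query. A minimal sketch (all query fields below are standard nvidia-smi fields):

  import subprocess  # quick health snapshot of a card under test

  fields = ("name,serial,vbios_version,driver_version,"
            "memory.total,temperature.gpu,power.draw")
  out = subprocess.run(
      ["nvidia-smi", f"--query-gpu={fields}", "--format=csv"],
      capture_output=True, text=True,
  )
  print(out.stdout)  # follow up with a sustained stress test before deployment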



FAQ – NVIDIA A100 80GB PCIe


What is the FP64 performance of NVIDIA A100 80GB PCIe?

It delivers 9.7 TFLOPS FP64 via CUDA cores and up to 19.5 TFLOPS using Tensor Cores.


What is the memory bandwidth of the NVIDIA A100 80GB PCIe?

The GPU provides 1,935 GB/s memory bandwidth with 80GB HBM2e memory.


How much does NVIDIA A100 80GB PCIe cost?

Typically €9,000–€25,000 per unit depending on condition (new vs refurbished), warranty, and market demand.


Does NVIDIA A100 80GB PCIe support MIG?

Yes. It supports up to seven isolated GPU instances via Multi-Instance GPU (MIG).


What is the power consumption?

The NVIDIA A100 80GB PCIe has a 300W TDP and requires high-airflow server cooling.


What is the difference between NVIDIA A100 PCIe and SXM?

The PCIe version runs at 300W over PCIe Gen4. The SXM version supports NVLink, higher power limits, and slightly higher memory bandwidth (~2,039 GB/s).



Technical Summary - NVIDIA A100 80GB PCIe


The NVIDIA A100 80GB PCIe provides:

  • 80GB HBM2e memory

  • 1,935 GB/s bandwidth

  • 9.7 TFLOPS FP64 (CUDA)

  • 19.5 TFLOPS FP64 (Tensor Core)

  • 156 TFLOPS TF32

  • MIG partitioning

  • 300W TDP


For PCIe-based enterprise AI and HPC infrastructure, it remains one of the most widely deployed Ampere-generation GPUs.






Sources - NVIDIA A100 80GB PCIe


CUDA performance and precision formats: https://docs.nvidia.com/cuda/
