NVIDIA Ampere

NVIDIA A100

The A100 set the standard for accelerated compute in the Ampere generation, combining MIG partitioning with balanced HPC throughput.

  • Launch year: 2020
  • Memory: 80 GB HBM2e (40 GB variants available)
  • Memory bandwidth: 2.0 TB/s (80 GB variant)
  • Peak FP16 (Tensor Core) / FP32: 312 TFLOPS · 19.5 TFLOPS

Market snapshot

From $0.27 /hr

Range $0.27 – $65.54 /hr

Catalog coverage

804 live offerings

Across 7 providers · 139 regions

  • Multi-Instance GPU (MIG) partitioning (sketched below)
  • Third-generation NVLink with 600 GB/s of GPU-to-GPU bandwidth
  • Broad hyperscaler availability
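
What MIG partitioning looks like from software can be shown with a minimal sketch using the NVML Python bindings (pynvml). It assumes an administrator has already enabled MIG mode and created instances on the A100; the snippet only enumerates the slices that exist.

```python
# Minimal MIG-enumeration sketch using pynvml (pip install nvidia-ml-py).
# Assumes MIG mode is already enabled and instances have been created.
import pynvml

pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    current, pending = pynvml.nvmlDeviceGetMigMode(gpu)
    if current == pynvml.NVML_DEVICE_MIG_ENABLE:
        # An A100 exposes at most seven MIG compute slices.
        for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
            try:
                mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
            except pynvml.NVMLError:
                continue  # slot not populated
            mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
            print(f"MIG slice {i}: {mem.total / 2**30:.1f} GiB")
    else:
        print("MIG is disabled on this GPU")
finally:
    pynvml.nvmlShutdown()
```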

Last refreshed Oct 20, 2025, 2:01 AM

Performance snapshot

Scores are normalized against the NVIDIA A100 (= 1.0) using public reference benchmarks for training and inference workloads; a sketch of the calculation follows the list.

  • AI Training (FP16): ×1.00
  • AI Inference (FP16): ×1.00
  • Memory Bandwidth: ×1.00
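
The ×1.00 values follow from the normalization itself: each metric is divided by the A100's own score, so the baseline GPU lands at 1.00 by construction. A minimal sketch, assuming a simple per-metric score dictionary (the metric names are illustrative; the two raw numbers come from the spec list above):

```python
# Normalize raw benchmark scores against a baseline GPU (the A100 here).
# Metric names are illustrative; any GPU's dict divided by the A100's dict
# yields multipliers like those shown in the list above.
from typing import Dict

def normalize(raw: Dict[str, float], baseline: Dict[str, float]) -> Dict[str, float]:
    """Return per-metric multipliers relative to the baseline GPU."""
    return {metric: raw[metric] / baseline[metric] for metric in baseline}

a100 = {"train_fp16_tflops": 312.0, "mem_bandwidth_tbs": 2.0}
print(normalize(a100, a100))  # every metric comes out as 1.0
```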

Provider availability

Price bands per provider, pulled from the live catalog; a sketch of how such bands can be derived follows the list.

  • DataCrunch: 48 offers · $0.27 – $9.28 /hr
  • RunPod: 116 offers · $0.60 – $13.12 /hr
  • Microsoft Azure: 226 offers · $0.81 – $65.54 /hr
  • Lambda Labs: 85 offers · $1.29 – $14.32 /hr
  • Google Cloud: 214 offers · $1.35 – $60.48 /hr
  • Amazon Web Services: 35 offers · $2.45 – $37.62 /hr
  • Oracle Cloud: 80 offers · $24.40 – $32.00 /hr
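
How such bands can be derived is sketched below: group the flat list of catalog offers by provider, then take the minimum and maximum hourly price per group. The offer tuples and their layout are illustrative assumptions, not the catalog's actual schema; the prices shown are taken from the list above.

```python
# Derive per-provider price bands from a flat list of (provider, $/hr) offers.
# The offer list here is a small illustrative subset, not the live catalog.
from collections import defaultdict

offers = [
    ("DataCrunch", 0.27), ("DataCrunch", 9.28),
    ("RunPod", 0.60), ("RunPod", 13.12),
    ("Oracle Cloud", 24.40), ("Oracle Cloud", 32.00),
]

bands = defaultdict(list)
for provider, price in offers:
    bands[provider].append(price)

# Sort providers by their cheapest offer, as in the list above.
for provider, prices in sorted(bands.items(), key=lambda kv: min(kv[1])):
    print(f"{provider}: ${min(prices):.2f} – ${max(prices):.2f} /hr")
```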

Popular regions

  • EU-RO-1: 22 offers
  • us-central1-c: 18 offers
  • us-central1-a: 18 offers
  • europe-west4-a: 18 offers
  • asia-southeast1-c: 18 offers
  • CA: 16 offers
  • CA-MTL-3: 16 offers
  • US: 14 offers

Weighted average price $11.90 /hr · median $6.96 /hr
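
The page does not state the weighting scheme behind the weighted average, so the sketch below treats the weights as an input (uniform weights reduce it to a plain mean); the prices and weights are placeholders, not the live catalog.

```python
# Weighted average and median over a set of hourly prices.
# Prices and weights below are placeholders; the weighting scheme used by the
# catalog (e.g. by offer count or by usage) is an assumption, not documented.
from statistics import median

def weighted_average(prices, weights):
    """Sum of price * weight divided by the total weight."""
    return sum(p * w for p, w in zip(prices, weights)) / sum(weights)

prices = [0.27, 6.96, 65.54]   # placeholder $/hr values
weights = [1.0, 1.0, 1.0]      # uniform placeholder weights

print(f"weighted average: ${weighted_average(prices, weights):.2f} /hr")
print(f"median: ${median(prices):.2f} /hr")
```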