NVIDIA Volta

NVIDIA V100

The V100 powered the early wave of large-scale deep learning and remains widely deployed for inference and HPC workloads.

Launch year
2017
Memory
32 GB HBM2 (a 16 GB variant is also available)
Memory bandwidth
900 GB/s
Peak FP16 (Tensor Core) / FP32
125 TFLOPS · 15.7 TFLOPS

Market snapshot

From $0.06 /hr

Range $0.06 – $33.87 /hr

Catalog coverage

1,984 live offerings

Across 7 providers · 108 regions

  • Volta Tensor Cores
  • NVLink 2.0 support
  • Affordable high-memory option

Last refreshed Oct 20, 2025, 1:53 AM

Performance snapshot

Scores are normalized to the NVIDIA A100 (= 1.0), using public reference benchmarks for training and inference workloads; a sketch of the normalization follows the figures below.

  • AI Training ×0.60
  • AI Inference ×0.50
  • Memory Bandwidth ×0.45
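
The multipliers above can be read as raw benchmark scores divided by the A100's score on the same benchmark. The sketch below illustrates only that normalization step; the raw throughput figures and metric names are placeholders, not real benchmark results from this catalog.

```python
# Illustrative sketch of how the normalized multipliers could be derived.
# All raw scores below are hypothetical placeholders; only the "divide by
# the A100 baseline" step reflects the normalization described above.

RAW_SCORES = {
    "NVIDIA A100": {"ai_training": 1000.0, "ai_inference": 2000.0, "mem_bandwidth_gbs": 2000.0},
    "NVIDIA V100": {"ai_training": 600.0, "ai_inference": 1000.0, "mem_bandwidth_gbs": 900.0},
}

def normalize(gpu: str, baseline: str = "NVIDIA A100") -> dict[str, float]:
    """Express each metric as a multiple of the baseline GPU (baseline = 1.0)."""
    base = RAW_SCORES[baseline]
    return {metric: round(value / base[metric], 2) for metric, value in RAW_SCORES[gpu].items()}

print(normalize("NVIDIA V100"))
# -> {'ai_training': 0.6, 'ai_inference': 0.5, 'mem_bandwidth_gbs': 0.45}
```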

Provider availability

Price bands per provider are pulled from the live catalog; an aggregation sketch follows the list.

  • Datacrunch · 32 offers · $0.06 – $1.10 /hr
  • RunPod · 46 offers · $0.10 – $1.33 /hr
  • Amazon Web Services · 68 offers · $0.38 – $33.87 /hr
  • Microsoft Azure · 161 offers · $0.57 – $30.49 /hr
  • Google Cloud · 1,380 offers · $0.74 – $26.65 /hr
  • Oracle Cloud · 280 offers · $1.48 – $23.60 /hr
  • Lambda Labs · 17 offers · $4.40 /hr (single price point)
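
As a rough illustration, per-provider price bands like those above can be produced by grouping raw catalog rows by provider and taking the count, minimum, and maximum. The row schema (provider, price_per_hour) and the sample rows below are assumptions for the sketch, not the catalog's actual format.

```python
# Minimal sketch: collapse raw catalog rows into per-provider price bands.
# The row structure and sample prices are assumed for illustration only.
from collections import defaultdict

catalog_rows = [
    {"provider": "Datacrunch", "price_per_hour": 0.06},
    {"provider": "Datacrunch", "price_per_hour": 1.10},
    {"provider": "RunPod", "price_per_hour": 0.10},
    {"provider": "RunPod", "price_per_hour": 1.33},
]

def price_bands(rows):
    """Return {provider: (offer_count, min_price, max_price)}."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row["provider"]].append(row["price_per_hour"])
    return {
        provider: (len(prices), min(prices), max(prices))
        for provider, prices in grouped.items()
    }

for provider, (count, low, high) in price_bands(catalog_rows).items():
    print(f"{provider}: {count} offers · ${low:.2f} – ${high:.2f} /hr")
```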

Popular regions

  • asia-east1-c · 126 offers
  • us-west1-b · 126 offers
  • us-east1-c · 126 offers
  • us-west1-a · 126 offers
  • us-central1-a · 126 offers
  • us-central1-b · 126 offers
  • us-central1-c · 126 offers
  • us-central1-f · 126 offers

Weighted average price $7.95 /hr · median $5.53 /hr
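
One plausible reading of these two figures, assuming the catalog carries one row per live offering, is that the weighted average is the mean price over all offering rows (so providers with more offerings carry more weight) and the median is taken over the same rows. The sketch below shows that computation on a tiny hypothetical sample, not the 1,984 real catalog entries.

```python
# Sketch of the summary statistics quoted above: an offer-weighted average
# price and a median price across live offerings. Prices are illustrative.
from statistics import median

offer_prices = [0.06, 0.74, 0.74, 4.40, 5.53, 12.00, 26.65, 33.87]  # $/hr, hypothetical sample

# With one row per live offering, the offer-weighted average is the mean over
# all rows; providers with more offerings naturally contribute more weight.
weighted_avg = sum(offer_prices) / len(offer_prices)
print(f"weighted average ${weighted_avg:.2f} /hr · median ${median(offer_prices):.2f} /hr")
```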