NVIDIA Hopper

NVIDIA H100

The H100 is NVIDIA's flagship Hopper-architecture GPU for large-scale AI training and inference, introducing FP8 compute via the Transformer Engine.

Launch year
2022
Memory
80 GB HBM3
Memory bandwidth
>3.0 TB/s
Peak FP16 / FP32
400 TFLOPS · 67 TFLOPS

Market snapshot

From $0.95 /hr

Range $0.95 – $126.64 /hr

Catalog coverage

495 live offerings

Across 6 providers · 74 regions

  • Transformer Engine mixed precision
  • NVLink 4 and NVSwitch
  • Strong cloud availability

Last refreshed Oct 20, 2025, 1:53 AM

Performance snapshot

Normalized versus NVIDIA A100 (=1.0). Values use public reference benchmarks for training and inference workloads.

  • AI Training (FP8): ×1.90 (≈1.9× A100 throughput)
  • AI Inference (FP16): ×1.80
  • Memory Bandwidth: ×1.60
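The normalization above is a plain ratio against an A100 baseline set to 1.0. The sketch below shows the arithmetic; the raw throughput figures in the example are illustrative placeholders, not official benchmark numbers.

```python
def normalize(h100_raw: float, a100_raw: float) -> float:
    """Return H100 throughput as a multiple of A100's (=1.0 baseline)."""
    return round(h100_raw / a100_raw, 2)

# Hypothetical example: if an A100 sustains 312 TFLOPS on an FP8-capable
# training workload and an H100 sustains ~592.8 TFLOPS, the normalized
# score is ×1.90, matching the "AI Training (FP8)" row above.
print(normalize(592.8, 312.0))  # 1.9
```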

Provider availability

Price bands per provider pulled from the live catalog.

  • Datacrunch · 40 offers · $0.95 – $15.92 /hr
  • RunPod · 126 offers · $1.25 – $23.31 /hr
  • Nebius · 4 offers · $1.25 – $23.60 /hr
  • Google Cloud · 202 offers · $1.98 – $126.64 /hr
  • Lambda Labs · 85 offers · $2.49 – $23.92 /hr
  • Amazon Web Services · 38 offers · $2.57 – $92.47 /hr
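The band list above is easy to query once re-encoded as data. The dictionary below simply restates the snapshot figures (offer count, floor, and ceiling price per provider); it is a minimal sketch, not a live catalog API.

```python
# (offer_count, floor $/hr, ceiling $/hr) per provider, copied from
# the snapshot above.
bands = {
    "Datacrunch": (40, 0.95, 15.92),
    "RunPod": (126, 1.25, 23.31),
    "Nebius": (4, 1.25, 23.60),
    "Google Cloud": (202, 1.98, 126.64),
    "Lambda Labs": (85, 2.49, 23.92),
    "Amazon Web Services": (38, 2.57, 92.47),
}

# Cheapest entry point across all providers.
cheapest = min(bands.items(), key=lambda kv: kv[1][1])
print(cheapest[0], cheapest[1][1])  # Datacrunch 0.95

# Offer counts sum to the 495 live offerings quoted in the catalog.
total_offers = sum(count for count, _, _ in bands.values())
print(total_offers)  # 495
```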

Popular regions

  • CA · 34 offers
  • ZA · 16 offers
  • EUR-IS-3 · 16 offers
  • US-KS-2 · 12 offers
  • EU-FR-1 · 12 offers
  • FIN-01 · 10 offers
  • FIN-02 · 10 offers
  • FIN-03 · 10 offers

Weighted average price $20.31 /hr · median $9.80 /hr
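A weighted average and a median can diverge sharply when a few expensive offers skew the distribution, as they do here. The sketch below shows one plausible way to derive both from (price, offer_count) pairs, assuming the average is weighted by offer count; the pairs are hypothetical, chosen only so the arithmetic is easy to follow.

```python
# Hypothetical (price $/hr, offer_count) pairs; not the real catalog data.
offers = [(2.0, 3), (10.0, 1), (50.0, 1)]

total = sum(count for _, count in offers)
weighted_avg = sum(price * count for price, count in offers) / total

# Median over the expanded list of individual offers.
expanded = sorted(p for price, n in offers for p in [price] * n)
mid = len(expanded) // 2
median = expanded[mid] if len(expanded) % 2 else (expanded[mid - 1] + expanded[mid]) / 2

print(weighted_avg)  # 13.2
print(median)        # 2.0 — far below the mean, pulled down by the cheap cluster
```

The same skew appears in the catalog figures: a $20.31 /hr weighted average against a $9.80 /hr median indicates a long tail of high-priced offerings.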