NVIDIA Hopper

NVIDIA H100 NVL

H100 NVL pairs two Hopper GPUs over an NVLink bridge, targeting high-throughput LLM inference.

Launch year
2023
Memory
2 × 94 GB HBM3
Memory bandwidth
>3.3 TB/s
Peak FP16 / FP32
480 TFLOPS · 80 TFLOPS

Market snapshot

$1.40 /hr

Range $1.40 – $23.31

Catalog coverage

36 live offerings

Across 1 provider · 5 regions

  • Dual H100 package
  • Shared high-bandwidth NVLink
  • Optimized for large-model inference

Last refreshed Oct 20, 2025, 1:51 AM

Performance snapshot

Normalized versus NVIDIA A100 (=1.0). Values use public reference benchmarks for training and inference workloads.

  • AI Inference ×2.30
  • Transformer Throughput ×2.20
  • Memory Bandwidth ×1.90
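The multipliers above are simple ratios against the A100 baseline. A minimal sketch of that normalization, using made-up raw scores (only the ratios matter; the metric names and numbers are illustrative, not the site's actual benchmark data):

```python
# Hypothetical raw benchmark scores in arbitrary units.
# A100 is the 1.0 baseline, as stated in the snapshot above.
A100_BASELINE = {
    "ai_inference": 100.0,
    "transformer_throughput": 100.0,
    "memory_bandwidth": 100.0,
}
H100_NVL = {
    "ai_inference": 230.0,
    "transformer_throughput": 220.0,
    "memory_bandwidth": 190.0,
}

def normalize(scores, baseline):
    """Express each score as a multiple of the baseline GPU's score."""
    return {k: round(scores[k] / baseline[k], 2) for k in scores}

print(normalize(H100_NVL, A100_BASELINE))
```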

Provider availability

Price bands per provider pulled from the live catalog.

  • RunPod · 36 offers
    $1.40 – $23.31 /hr

Popular regions

  • CA · 18 offers
  • OC-AU-1 · 6 offers
  • US-KS-2 · 6 offers
  • HR · 4 offers
  • US-CA-2 · 2 offers

Weighted average price $7.02 /hr · median $5.18 /hr