NVIDIA H100 SXM: Specs, Pricing & Cloud Availability
Last updated: 2026-03-19
Technical Specifications
| Specification | Value |
|---|---|
| Architecture | Hopper |
| VRAM | 80 GB HBM3 |
| Memory Bandwidth | 3.35 TB/s |
| FP16 Performance | 1,979 TFLOPS |
| FP8 Performance | 3,958 TFLOPS |
| TDP | 700W |
| Interconnect | NVLink 4.0 |
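A quick way to interpret the compute and bandwidth figures together is the roofline "ridge point": dividing peak FLOPS by memory bandwidth tells you how many floating-point operations a kernel must perform per byte moved from HBM before it becomes compute-bound rather than bandwidth-bound. A minimal sketch using only the numbers from the table above:

```python
# Back-of-envelope roofline check from the spec table.
# All figures come from the table; nothing else is assumed.

FP16_TFLOPS = 1979       # peak FP16 Tensor Core throughput (TFLOPS)
BANDWIDTH_TBPS = 3.35    # HBM3 memory bandwidth (TB/s)

# Ridge point: FLOPs a kernel must do per byte of HBM traffic
# to saturate compute instead of memory bandwidth.
ridge_flops_per_byte = (FP16_TFLOPS * 1e12) / (BANDWIDTH_TBPS * 1e12)
print(f"Ridge point: ~{ridge_flops_per_byte:.0f} FLOPs/byte")
# Kernels with lower arithmetic intensity (e.g. batch-1 LLM decoding,
# which streams every weight once per token) are bandwidth-bound.
```

This is why low-batch LLM inference on the H100 is typically limited by the 3.35 TB/s of memory bandwidth rather than the headline TFLOPS.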
Cloud Pricing
| Provider | On-Demand $/hr | Spot $/hr | Availability |
|---|---|---|---|
| Microsoft Azure | $2.50-$4.50 | N/A | Available |
| RunPod | $2.50-$4.50 | N/A | Available |
| Lambda Labs | $2.50-$4.50 | N/A | Available |
| CoreWeave | $2.50-$4.50 | N/A | Available |
| Together AI | $2.50-$4.50 | N/A | Available |
| Vultr | $2.50-$4.50 | N/A | Available |
| Nebius | $2.50-$4.50 | N/A | Available |
| Oracle Cloud (OCI) | $2.50-$4.50 | N/A | Available |
| Cudo Compute | $2.50-$4.50 | N/A | Available |
| FluidStack | $2.50-$4.50 | N/A | Available |
| Paperspace (DigitalOcean) | $2.50-$4.50 | N/A | Available |
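The hourly range above translates directly into job cost. A minimal sketch of the arithmetic, where the $2.50-$4.50/hr range comes from the table but the 8-GPU node size and 72-hour runtime are illustrative assumptions, not figures from this page:

```python
# Hypothetical cost estimate for a multi-GPU run at on-demand rates.
# GPU count and runtime are assumptions chosen for illustration.

GPUS = 8                  # assumed: one full 8x H100 SXM node
HOURS = 72                # assumed: a three-day fine-tuning run
LOW, HIGH = 2.50, 4.50    # on-demand $/hr per GPU, from the table

low_cost = GPUS * HOURS * LOW
high_cost = GPUS * HOURS * HIGH
print(f"8x H100 for 72h: ${low_cost:,.0f} to ${high_cost:,.0f}")
```

The spread between the low and high end of the range nearly doubles the total, so the per-provider rate matters more than it first appears for long runs.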
Benchmarks
The NVIDIA H100 SXM delivers 1,979 TFLOPS of FP16 Tensor Core performance and 989 TFLOPS of TF32 Tensor Core performance, backed by 3.35 TB/s of memory bandwidth.
Best Use Cases
The NVIDIA H100 SXM is optimized for large-scale LLM training and inference.
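For LLM inference, the practical question is whether a model's weights fit in the card's 80 GB of HBM3. A rough sketch, assuming weight memory dominates (KV cache, activations, and framework overhead are ignored) and using illustrative parameter counts not taken from this page:

```python
# Rough single-GPU fit check against the H100 SXM's 80 GB of VRAM.
# Only weight memory is counted; model sizes below are illustrative.

VRAM_GB = 80  # from the spec table

def weights_gb(params_billion: float, bytes_per_param: int) -> float:
    """Approximate weight memory in GB (treating 1 GB as 1e9 bytes)."""
    return params_billion * bytes_per_param

for params, dtype, nbytes in [(70, "FP16", 2), (70, "FP8", 1), (13, "FP16", 2)]:
    gb = weights_gb(params, nbytes)
    verdict = "fits" if gb <= VRAM_GB else "needs multi-GPU"
    print(f"{params}B @ {dtype}: ~{gb:.0f} GB -> {verdict}")
```

This also shows why the H100's native FP8 support matters: halving bytes per parameter can bring a model that overflows one card back within a single GPU's memory.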