# NVIDIA H100 SXM: Specs, Pricing & Cloud Availability
Last updated: 2026-03-22
## Technical Specifications
| Specification | Value |
|---|---|
| Architecture | Hopper |
| VRAM | 80 GB HBM3 |
| Memory Bandwidth | 3.35 TB/s |
| FP16 Tensor Core Performance | 1,979 TFLOPS (with sparsity) |
| FP8 Tensor Core Performance | 3,958 TFLOPS (with sparsity) |
| TDP | 700W |
| Interconnect | NVLink 4.0 |
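The compute-to-bandwidth ratio implied by the table determines which workloads are compute-bound versus memory-bound on this card. A minimal roofline sketch using the numbers above (the variable names are illustrative):

```python
# Roofline balance point for the H100 SXM, from the spec table above.
FP16_TFLOPS = 1979      # FP16 Tensor Core throughput (with sparsity)
BANDWIDTH_TBS = 3.35    # HBM3 memory bandwidth

# FLOPs per byte of memory traffic needed to stay compute-bound.
balance_flops_per_byte = (FP16_TFLOPS * 1e12) / (BANDWIDTH_TBS * 1e12)
print(f"Compute-bound above ~{balance_flops_per_byte:.0f} FLOPs/byte")
```

Kernels with lower arithmetic intensity than this balance point (roughly 591 FLOPs/byte) are limited by memory bandwidth rather than peak TFLOPS.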
## Cloud Pricing
| Provider | On-Demand $/hr | Spot $/hr | Availability |
|---|---|---|---|
| Microsoft Azure | $6.98/hr | N/A | Available |
| RunPod | $2.79/hr | N/A | Available |
| Lambda Labs | $2.99/hr | N/A | Available |
| CoreWeave | $2.06/hr | N/A | Available |
| Together AI | $4.00/hr | N/A | Available |
| Vultr | $5.27/hr | N/A | Available |
| Nebius | $3.16/hr | N/A | Available |
| Oracle Cloud (OCI) | $4.25/hr | N/A | Available |
| Cudo Compute | $2.50/hr | N/A | Available |
| FluidStack | $2.45/hr | N/A | Available |
| Paperspace (DigitalOcean) | $5.95/hr | N/A | Available |
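To compare providers for a concrete job, multiply the on-demand rate by GPU-hours. A quick sketch using the rates from the table above (the 8-GPU, 72-hour run is a made-up example workload):

```python
# On-demand $/hr per H100 SXM, taken from the pricing table above.
rates = {
    "CoreWeave": 2.06, "FluidStack": 2.45, "Cudo Compute": 2.50,
    "RunPod": 2.79, "Lambda Labs": 2.99, "Nebius": 3.16,
    "Together AI": 4.00, "Oracle Cloud (OCI)": 4.25,
    "Vultr": 5.27, "Paperspace (DigitalOcean)": 5.95,
    "Microsoft Azure": 6.98,
}

GPUS, HOURS = 8, 72  # hypothetical 8-GPU, 72-hour fine-tuning run
costs = {provider: rate * GPUS * HOURS for provider, rate in rates.items()}
cheapest = min(costs, key=costs.get)
print(f"Cheapest: {cheapest} at ${costs[cheapest]:,.2f}")
```

Note that on-demand rates change frequently and often exclude storage, egress, and networking charges, so treat this as a first-pass filter rather than a full cost model.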
## Benchmarks
The NVIDIA H100 SXM delivers up to 1,979 TFLOPS of FP16 Tensor Core throughput and 989 TFLOPS of TF32 Tensor Core throughput (both with sparsity), backed by 3.35 TB/s of HBM3 memory bandwidth.
## Best Use Cases
The NVIDIA H100 SXM is optimized for large-scale LLM training and inference.
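Whether a given LLM fits on a single 80 GB card is easy to estimate: FP16 weights take 2 bytes per parameter, plus some headroom for KV cache and activations. A rough sketch (the 20% overhead factor is an illustrative assumption, not a measured figure):

```python
VRAM_GB = 80  # H100 SXM HBM3 capacity

def fits_fp16(params_billion: float, overhead: float = 1.2) -> bool:
    """Crude fit check: 2 bytes/param at FP16, scaled by an assumed overhead."""
    needed_gb = params_billion * 2 * overhead
    return needed_gb <= VRAM_GB

print(fits_fp16(13))   # 13B model: ~31 GB, fits on one card
print(fits_fp16(70))   # 70B model: ~168 GB, needs multi-GPU or quantization
```

Models that fail this check can still run on a single H100 via quantization (e.g. 8-bit or 4-bit weights) or be sharded across NVLink-connected GPUs.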