NVIDIA H100 SXM: Specs, Pricing & Cloud Availability

Last updated: 2026-03-22

Technical Specifications

Architecture: Hopper
VRAM: 80 GB HBM3
Memory Bandwidth: 3.35 TB/s
FP16 Tensor Core: 1,979 TFLOPS (with sparsity)
FP8 Tensor Core: 3,958 TFLOPS (with sparsity)
TDP: 700 W
Interconnect: NVLink 4.0 (900 GB/s)
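The 80 GB of HBM3 is usually the first constraint when sizing workloads. As a rough illustration (not an official sizing rule), the sketch below estimates how many model parameters fit in VRAM at a given precision, reserving a slice of memory for activations and KV cache; the 20% overhead figure is an assumption, not a measured value.

```python
# Back-of-envelope VRAM sizing for the H100 SXM's 80 GB.
# Real usage adds optimizer state, activations, and framework overhead,
# so treat these numbers as upper bounds, not guarantees.
VRAM_GB = 80

def max_params_billion(bytes_per_param: float, overhead_frac: float = 0.2) -> float:
    """Largest parameter count (in billions) whose weights fit in VRAM,
    reserving `overhead_frac` of memory for activations / KV cache.
    The 0.2 default is an illustrative assumption."""
    usable_bytes = VRAM_GB * 1e9 * (1 - overhead_frac)
    return usable_bytes / bytes_per_param / 1e9

fp16_capacity = max_params_billion(2.0)  # FP16/BF16: 2 bytes per parameter
fp8_capacity = max_params_billion(1.0)   # FP8: 1 byte per parameter
```

Under these assumptions a single H100 SXM holds roughly a 32B-parameter model in FP16 weights, or about 64B in FP8, before accounting for serving overhead.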

Cloud Pricing

| Provider | On-Demand ($/hr) | Spot ($/hr) | Availability |
| --- | --- | --- | --- |
| Microsoft Azure | $6.98 | N/A | Available |
| RunPod | $2.79 | N/A | Available |
| Lambda Labs | $2.99 | N/A | Available |
| CoreWeave | $2.06 | N/A | Available |
| Together AI | $4.00 | N/A | Available |
| Vultr | $5.27 | N/A | Available |
| Nebius | $3.16 | N/A | Available |
| Oracle Cloud (OCI) | $4.25 | N/A | Available |
| Cudo Compute | $2.50 | N/A | Available |
| FluidStack | $2.45 | N/A | Available |
| Paperspace (DigitalOcean) | $5.95 | N/A | Available |
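The hourly gap between providers compounds quickly for long-running jobs. A minimal sketch, using a subset of the on-demand rates above (prices change frequently; verify with each provider before committing):

```python
# Rough monthly cost comparison from the on-demand table above.
# Rates are snapshots from this article, not live quotes.
rates = {
    "CoreWeave": 2.06,
    "FluidStack": 2.45,
    "RunPod": 2.79,
    "Lambda Labs": 2.99,
    "Microsoft Azure": 6.98,
}

def monthly_cost(rate_per_hr: float, hours: float = 730) -> float:
    """On-demand cost for one GPU running a full average month (~730 h)."""
    return rate_per_hr * hours

cheapest = min(rates, key=rates.get)
spread = monthly_cost(max(rates.values())) - monthly_cost(min(rates.values()))
```

At these rates, one GPU-month spans roughly $1,500 (CoreWeave) to about $5,100 (Azure), a spread of over $3,500 per GPU before any committed-use discounts.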

Benchmarks

On paper, the NVIDIA H100 SXM delivers up to 1,979 TFLOPS of FP16 Tensor Core throughput and 989 TFLOPS of TF32 Tensor Core throughput (both with sparsity), backed by 3.35 TB/s of HBM3 memory bandwidth.
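Peak TFLOPS only materialize when a kernel does enough math per byte of memory traffic. A quick roofline-style calculation from the two headline figures above shows the break-even arithmetic intensity:

```python
# Roofline break-even for the H100 SXM: FLOPs a kernel must perform per byte
# moved from HBM before it becomes compute-bound rather than memory-bound.
PEAK_FP16_FLOPS = 1_979e12   # 1,979 TFLOPS FP16 Tensor Core (with sparsity)
MEM_BW_BYTES = 3.35e12       # 3.35 TB/s HBM3 bandwidth

break_even_intensity = PEAK_FP16_FLOPS / MEM_BW_BYTES  # FLOPs per byte
```

The ratio works out to roughly 590 FLOPs per byte, which is why bandwidth-bound workloads such as LLM decoding rarely approach the quoted TFLOPS figures while large-batch training GEMMs can.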

Best Use Cases

The NVIDIA H100 SXM is best suited to large-scale LLM training and inference: 80 GB of HBM3 holds larger model shards per GPU, FP8 Tensor Cores accelerate both training and serving, and NVLink 4.0 keeps multi-GPU tensor parallelism fast.

FAQ

{{FAQ_SECTION}}