NVIDIA H100 SXM: Specs, Pricing & Cloud Availability

Last updated: 2026-03-19


Technical Specifications

| Specification | Value |
|---|---|
| Architecture | Hopper |
| VRAM | 80 GB HBM3 |
| Memory Bandwidth | 3.35 TB/s |
| FP16 Performance | 1,979 TFLOPS (Tensor Core, with sparsity) |
| FP8 Performance | 3,958 TFLOPS (Tensor Core, with sparsity) |
| TDP | 700 W |
| Interconnect | NVLink 4.0 |
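To see what these headline numbers imply in practice, a quick roofline sketch (using only the FP16 and bandwidth figures from the table above) shows the arithmetic intensity a kernel needs before the H100 SXM becomes compute-bound rather than memory-bound:

```python
# Back-of-the-envelope roofline math from the spec table above.
# A kernel saturates compute (rather than HBM3 bandwidth) only if its
# arithmetic intensity exceeds peak FLOPS / memory bandwidth.

PEAK_FP16_TFLOPS = 1_979   # Tensor Core FP16, with sparsity
MEM_BANDWIDTH_TBPS = 3.35  # HBM3

def crossover_intensity(tflops: float, tbps: float) -> float:
    """FLOPs per byte needed to be compute-bound rather than bandwidth-bound."""
    return (tflops * 1e12) / (tbps * 1e12)

print(round(crossover_intensity(PEAK_FP16_TFLOPS, MEM_BANDWIDTH_TBPS)))  # prints 591
```

Anything below roughly 590 FLOPs/byte (e.g. small-batch inference) is bandwidth-bound on this card, which is why the 3.35 TB/s figure matters as much as the TFLOPS number.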

Cloud Pricing

| Provider | On-Demand $/hr | Spot $/hr | Availability |
|---|---|---|---|
| Microsoft Azure | $2.50-$4.50 | N/A | Available |
| RunPod | $2.50-$4.50 | N/A | Available |
| Lambda Labs | $2.50-$4.50 | N/A | Available |
| CoreWeave | $2.50-$4.50 | N/A | Available |
| Together AI | $2.50-$4.50 | N/A | Available |
| Vultr | $2.50-$4.50 | N/A | Available |
| Nebius | $2.50-$4.50 | N/A | Available |
| Oracle Cloud (OCI) | $2.50-$4.50 | N/A | Available |
| Cudo Compute | $2.50-$4.50 | N/A | Available |
| FluidStack | $2.50-$4.50 | N/A | Available |
| Paperspace (DigitalOcean) | $2.50-$4.50 | N/A | Available |
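The hourly band above translates directly into job budgets. A minimal sketch, using the table's generic $2.50-$4.50/hr range (not a quote from any specific provider), for an 8-GPU node running one day:

```python
# Rough on-demand cost estimate at the $2.50-$4.50/hr band from the
# pricing table. Rates are illustrative, not provider quotes.

def job_cost(gpus: int, hours: float, rate_per_gpu_hr: float) -> float:
    """Total cost of a job: GPU count x wall-clock hours x hourly rate."""
    return gpus * hours * rate_per_gpu_hr

low = job_cost(8, 24, 2.50)   # 8x H100 for 24h at the low end
high = job_cost(8, 24, 4.50)  # ...and at the high end
print(f"${low:,.0f} - ${high:,.0f}")  # prints $480 - $864
```

At this spread, the same day-long job nearly doubles in cost between the cheapest and most expensive end of the band, which is why spot capacity and committed-use discounts matter for sustained workloads.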

Benchmarks

The NVIDIA H100 SXM delivers up to 1,979 TFLOPS of FP16 and 989 TFLOPS of TF32 Tensor Core throughput (with sparsity), backed by 3.35 TB/s of HBM3 memory bandwidth.

Best Use Cases

The NVIDIA H100 SXM is optimized for large-scale LLM training and inference.
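For sizing such a training run, the common 6ND approximation (training FLOPs ≈ 6 × parameters × tokens) can be combined with the FP16 peak from the spec table. The 40% model-FLOPs-utilization figure below is an assumed value for illustration, not a measured benchmark:

```python
# Estimate H100 GPU-hours for a dense-transformer training run using
# the standard 6*N*D FLOPs approximation. MFU is an assumed value.

PEAK_FP16_FLOPS = 1_979e12  # from the spec table (Tensor Core, with sparsity)
MFU = 0.40                  # assumed model FLOPs utilization

def training_gpu_hours(params: float, tokens: float) -> float:
    """GPU-hours to train `params` parameters on `tokens` tokens."""
    total_flops = 6 * params * tokens
    return total_flops / (PEAK_FP16_FLOPS * MFU) / 3600

# Illustrative example: a 7B-parameter model trained on 1T tokens
print(round(training_gpu_hours(7e9, 1e12)))
```

At roughly 15,000 GPU-hours for that hypothetical run, the pricing band above puts the compute bill in the tens of thousands of dollars, which is the scale at which the H100 SXM's use case sits.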

FAQ

{{FAQ_SECTION}}