AI Infrastructure Index
Tracking GPU cloud providers, inference APIs, MLOps platforms, and compute pricing across the AI stack. Automated hourly updates from public sources.
What This Index Covers
GPU Cloud Providers
Pricing, availability, and performance data for major GPU cloud platforms including on-demand and reserved instances.
Inference APIs
Latency, throughput, and cost comparisons across hosted inference endpoints for frontier and open-source models.
MLOps Platforms
Feature tracking for experiment management, model registries, deployment pipelines, and monitoring tools.
Compute Pricing
Historical and current pricing trends for AI training and inference across major cloud providers.
Methodology
Data is collected hourly via automated pipelines from official provider documentation, public APIs, and community benchmarks. All collection scripts are open-source and auditable.
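To make the pipeline concrete, here is a minimal sketch of the normalization step such a collector might perform. The payload schema, field names, and provider name are hypothetical, not the index's actual format; the point is only that raw per-provider pricing gets flattened into timestamped records that support historical trend tracking.

```python
import json
from datetime import datetime, timezone

def normalize_pricing(raw: str, provider: str) -> list[dict]:
    """Flatten a provider pricing payload (hypothetical schema) into
    timestamped records suitable for historical trend tracking."""
    fetched_at = datetime.now(timezone.utc).isoformat()
    return [
        {
            "provider": provider,
            "gpu": entry["gpu"],
            "usd_per_hour": float(entry["price_usd_hr"]),
            "fetched_at": fetched_at,
        }
        for entry in json.loads(raw)
    ]

# Hypothetical payload shaped like a provider's public pricing endpoint.
sample = '[{"gpu": "H100 SXM", "price_usd_hr": "2.49"}]'
print(normalize_pricing(sample, "example-cloud"))
```

A real collector would fetch each provider's endpoint on a schedule and append these records to a time-series store; keeping the normalizer a pure function of the payload is what makes the scripts auditable.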
Explore This Index
GPUs
- All GPUs: Compare specs, VRAM, and cloud availability across all AI accelerators
- NVIDIA H200: 141 GB HBM3e, Hopper architecture
- NVIDIA H100 SXM: 80 GB HBM3, 1,979 TFLOPS FP16
- NVIDIA B200: 192 GB HBM3e, Blackwell architecture
- NVIDIA A100 SXM: 80 GB HBM2e, Ampere architecture
- NVIDIA A100 PCIe: 80 GB HBM2e, PCIe form factor
- NVIDIA L40S: 48 GB GDDR6, Ada Lovelace
- NVIDIA A10: 24 GB GDDR6, inference-optimized
- AMD MI300X: 192 GB HBM3, AMD CDNA 3
- AMD MI325X: 256 GB HBM3e, AMD CDNA 3
- NVIDIA RTX 3090: 24 GB GDDR6X, Ampere consumer
- NVIDIA RTX 4090: 24 GB GDDR6X, Ada Lovelace consumer
- NVIDIA RTX A6000: 48 GB GDDR6, Ampere professional
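The VRAM figures above feed directly into the kind of estimate the Model GPU Sizing guide covers. As a rough sketch: weights at a given precision plus a flat overhead allowance for KV cache and activations. The 1.2x overhead factor and the weights-only formula are simplifying rules of thumb, not the index's methodology.

```python
def vram_needed_gb(params_b: float, bytes_per_param: int = 2,
                   overhead: float = 1.2) -> float:
    """Rule-of-thumb inference footprint: model weights at the given
    precision, plus a flat 20% allowance for KV cache and activations."""
    return params_b * bytes_per_param * overhead

# VRAM per accelerator, taken from the spec list above (GB).
VRAM_GB = {
    "NVIDIA H200": 141, "NVIDIA H100 SXM": 80, "NVIDIA B200": 192,
    "NVIDIA A100 SXM": 80, "NVIDIA L40S": 48, "NVIDIA A10": 24,
    "AMD MI300X": 192, "AMD MI325X": 256,
}

def single_gpu_fits(params_b: float, bytes_per_param: int = 2) -> list[str]:
    """Accelerators whose VRAM covers the estimate on a single card."""
    need = vram_needed_gb(params_b, bytes_per_param)
    return sorted(g for g, v in VRAM_GB.items() if v >= need)

print(single_gpu_fits(70))  # a 70B model at FP16
```

By this estimate a 70B model at FP16 needs about 168 GB, which on a single card only the 192 GB-plus parts cover; smaller cards require quantization or multi-GPU sharding.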
Cloud Providers
- All Providers: Compare GPU cloud providers side by side
- Microsoft Azure: Enterprise SLAs, global regions, hybrid cloud
- CoreWeave: GPU-native, Kubernetes-first infrastructure
- Lambda: ML-optimized, Lambda Stack, reserved capacity
- RunPod: Serverless GPU, spot pricing, community cloud
- Vultr: Global edge, simple API, bare metal
- Nebius: EU data sovereignty, DGX-ready
- Vast.ai: Marketplace model, cheapest spot GPUs
- Together AI: Inference-optimized, open-source models
- Oracle Cloud (OCI): Bare metal, RDMA networking, superclusters
- Cudo Compute: Sustainable, distributed, competitive pricing
- FluidStack: Low-cost, on-demand, API-first
- Paperspace: Gradient platform, notebooks, easy onboarding
Analysis & Guides
- Cloud GPU Pricing: Live pricing comparison across all providers
- GPU Specifications: Side-by-side spec comparison table
- Buy vs Rent Analysis: When to purchase vs rent cloud GPUs
- GPU Cost Optimization: Strategies to reduce GPU compute costs
- Inference Benchmarks: Latency and throughput benchmark data
- Training Costs: Cost estimates for training various model sizes
- Model GPU Sizing: Match model parameters to GPU requirements
- Networking & Interconnects: NVLink, InfiniBand, and RDMA comparison
- AI Accelerators: TPUs, custom silicon, and FPGAs beyond GPUs
- Regulatory Mandate Map: AI infrastructure compliance by jurisdiction
Related Indexes
Explore other Alpha One Index research areas for a complete view of the AI ecosystem.
AI TRiSM Index
For governance and compliance frameworks covering AI infrastructure
AI LLMOps Index
For LLM deployment, inference serving, and operations tooling
AI AppSec Index
For application security tools protecting AI systems
AI Red Teaming Index
For adversarial testing methodologies and security research