GPU Cloud Providers
Pricing, availability, and performance data for major GPU cloud platforms including on-demand and reserved instances.
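To make on-demand and reserved prices directly comparable, a reservation's upfront fee can be amortized over its term. A minimal sketch of that calculation, using entirely hypothetical rates (not values from the dataset):

```python
# Sketch: comparing an on-demand rate against the effective hourly cost
# of a reserved instance. All prices are hypothetical placeholders.

def effective_hourly_cost(upfront: float, hourly: float, term_hours: int) -> float:
    """Amortize the upfront reservation fee over the term, then add the hourly rate."""
    return upfront / term_hours + hourly

on_demand = 2.50                      # $/hr, hypothetical on-demand GPU rate
reserved = effective_hourly_cost(
    upfront=6000.0,                   # hypothetical 1-year upfront fee
    hourly=1.20,                      # hypothetical discounted hourly rate
    term_hours=365 * 24,
)
savings = 1 - reserved / on_demand

print(f"reserved effective rate: ${reserved:.2f}/hr")
print(f"savings vs. on-demand:  {savings:.0%}")
```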
Tracking GPU cloud providers, inference APIs, MLOps platforms, and compute pricing across the AI stack. Automated weekly updates from public sources.
Inference APIs
Latency, throughput, and cost comparisons across hosted inference endpoints for frontier and open-source models.
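Comparisons like these typically rank endpoints on cost per million tokens, breaking ties on latency. A small sketch of that ranking, with hypothetical endpoint names and numbers (not measurements from the dataset):

```python
# Sketch: ranking hosted inference endpoints by cost per 1M output tokens,
# tie-breaking on p50 latency. All names and figures are hypothetical.

from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    usd_per_1m_output_tokens: float
    p50_latency_ms: float
    throughput_tok_s: float

endpoints = [
    Endpoint("provider-a/llama-70b", 0.90, 420.0, 85.0),
    Endpoint("provider-b/llama-70b", 0.70, 610.0, 60.0),
    Endpoint("provider-c/llama-70b", 0.70, 380.0, 110.0),
]

# Cheapest first; equal-cost endpoints are ordered by responsiveness.
ranked = sorted(endpoints, key=lambda e: (e.usd_per_1m_output_tokens, e.p50_latency_ms))
for e in ranked:
    print(f"{e.name}: ${e.usd_per_1m_output_tokens}/1M tok, {e.p50_latency_ms:.0f} ms p50")
```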
MLOps Platforms
Feature tracking for experiment management, model registries, deployment pipelines, and monitoring tools.
Compute Pricing
Historical and current pricing trends for AI training and inference across major cloud providers.
Data is collected weekly via automated pipelines from official provider documentation, public APIs, and community benchmarks. All collection scripts are open-source and auditable.
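A pipeline like this usually normalizes scraped rows into one canonical schema before committing them to the dataset. A minimal sketch of that step, where the field names and sample rows are illustrative assumptions, not the repository's actual schema:

```python
# Sketch: normalizing raw pricing rows scraped from different providers
# into a common record format. Providers, prices, and field names are
# hypothetical, not taken from the dataset.

RAW_ROWS = [
    # (provider, gpu, price string as scraped, billing unit as scraped)
    ("provider-a", "H100", "$2.49", "per hour"),
    ("provider-b", "H100", "3.10 USD/hr", "hourly"),
]

def normalize(provider: str, gpu: str, price: str, unit: str) -> dict:
    """Strip currency symbols and unit suffixes, emit one canonical record."""
    cleaned = price.replace("$", "").replace("USD/hr", "").replace("USD", "").strip()
    return {
        "provider": provider,
        "gpu": gpu,
        "usd_per_hour": float(cleaned),
        "billing": "hourly",  # every sample unit above denotes hourly billing
    }

records = [normalize(*row) for row in RAW_ROWS]
for r in records:
    print(r)
```

In a real pipeline this step would run after each weekly fetch, so that rows from documentation pages, public APIs, and benchmarks all land in the same shape.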