
GPUperHour
Real-time pricing for H100, A100, and RTX 4090 instances across all clouds.

About GPUperHour
Stop Overpaying for Cloud Compute
GPUperHour is a real-time cloud GPU price comparison tool built for AI researchers, machine learning engineers, and startup founders. The market for affordable compute is incredibly fragmented today: the exact same GPU models vary wildly in price (sometimes by more than 13x) across different hyperscalers and GPU marketplaces. GPUperHour solves this by tracking live pricing, hardware specs, and stock availability across the entire cloud compute ecosystem.
Key Features
Real-Time Aggregation: We track live pricing and availability for over 50 different GPU models, including highly sought-after chips like the NVIDIA H100, A100, RTX 4090, RTX 5090, L40S, and V100.
Massive Provider Network: Compare active offers across 30+ cloud infrastructure providers. We track hyperscalers (AWS, GCP, Azure) alongside specialized AI GPU clouds (RunPod, Vast.ai, Lambda Labs, CoreWeave, TensorDock, VERDA, Crusoe, and more).
Advanced Filtering: Easily sort and filter available instances by GPU model, total VRAM, geographic region, and instance type (spot vs. on-demand vs. reserved).
True Cost Visibility: Stop getting tricked by confusing pricing pages. We normalize every offer to a simple hourly rate so you can compare apples-to-apples in seconds.
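The filtering described above boils down to matching offers against a few criteria and sorting by price. Here is a minimal sketch of that logic; the `GpuOffer` shape and `cheapest_offers` helper are hypothetical illustrations, not GPUperHour's actual data model or API.

```python
from dataclasses import dataclass

@dataclass
class GpuOffer:
    provider: str       # e.g. "RunPod", "AWS" (hypothetical field names)
    gpu_model: str      # e.g. "H100 SXM5"
    vram_gb: int        # total VRAM per GPU
    region: str         # e.g. "us-east"
    instance_type: str  # "spot", "on-demand", or "reserved"
    price_per_hour: float

def cheapest_offers(offers, gpu_model=None, min_vram_gb=0,
                    region=None, instance_type=None):
    """Return offers matching the given filters, cheapest first."""
    matches = [
        o for o in offers
        if (gpu_model is None or o.gpu_model == gpu_model)
        and o.vram_gb >= min_vram_gb
        and (region is None or o.region == region)
        and (instance_type is None or o.instance_type == instance_type)
    ]
    return sorted(matches, key=lambda o: o.price_per_hour)

offers = [
    GpuOffer("AWS", "H100 SXM5", 80, "us-east", "on-demand", 11.10),
    GpuOffer("RunPod", "H100 SXM5", 80, "us-east", "spot", 0.80),
    GpuOffer("Lambda Labs", "A100", 40, "us-east", "on-demand", 1.29),
]
best = cheapest_offers(offers, gpu_model="H100 SXM5")
```

Sorting after filtering (rather than the reverse) keeps each filter check cheap and makes the "cheapest match first" guarantee explicit.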
Why Use GPUperHour?
The pricing gap in the GPU market is massive. For example, the hourly rate for an NVIDIA H100 SXM5 80GB can range from $0.80/hr at the low end to $11.10/hr at the high end: roughly a $7,400/month difference for an instance running around the clock, on the exact same hardware.
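The monthly figure above is straightforward arithmetic; a quick sanity check, assuming a 30-day month of continuous 24/7 usage:

```python
low, high = 0.80, 11.10      # $/hr for an H100 SXM5 80GB, cheapest vs. priciest
hours_per_month = 24 * 30    # assumes a 30-day month, running around the clock
monthly_gap = (high - low) * hours_per_month
print(f"${monthly_gap:,.0f}/month")  # prints $7,416/month
```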
Whether you are spinning up a single RTX 4090 for a weekend Stable Diffusion project or securing a cluster of H100s for foundation model training, GPUperHour ensures you never burn your runway on overpriced compute. Find your perfect instance and scale your AI workflows today.
