With the industry and on-demand market gradually shifting toward NVIDIA H100s as capacity ramps up, it's helpful to look back at NVIDIA A100 pricing trends to forecast future H100 market dynamics. At Shadeform, our unified interface and cloud console let you deploy and manage your GPU fleet across providers. As part of this, we track GPU availability and prices across clouds to pinpoint the best place for you to run your workload.
Single NVIDIA A100 Cards:
Below, we compare price and availability for NVIDIA A100s across 8 clouds over the past 3 months.
- Oblivus and Paperspace: These providers lead the pack in availability for single A100 VMs, each with an availability rate of 98.33%. They demonstrate a robust commitment to offering available on-demand instances, albeit at a higher price point.
- Datacrunch, Vultr, and Runpod: These providers strike a commendable balance between cost-effectiveness and availability, with hourly rates in the mid-range of the clouds we track.
- Lambda Labs: Takes a unique stance: their on-demand prices are so low that they are hard to compete with, but availability is practically zero. More on this below.
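To make the price-versus-availability trade-off concrete, here is a toy sketch in Python. The provider names, prices, and availability rates below are entirely hypothetical placeholders, not Shadeform data; the point is only to show how one might pick the cheapest provider that still clears an availability bar:

```python
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    hourly_price_usd: float  # hypothetical on-demand price for a single A100
    availability: float      # fraction of checks where an instance was available

# Placeholder numbers illustrating the three patterns described above.
OFFERS = [
    Offer("ProviderA", 2.40, 0.98),  # high availability, higher price
    Offer("ProviderB", 1.80, 0.70),  # mid-range on both axes
    Offer("ProviderC", 1.10, 0.02),  # cheapest, but almost never available
]

def best_available(offers, min_availability=0.5):
    """Return the cheapest offer meeting a minimum availability threshold."""
    eligible = [o for o in offers if o.availability >= min_availability]
    return min(eligible, key=lambda o: o.hourly_price_usd, default=None)

print(best_available(OFFERS).provider)                         # ProviderB
print(best_available(OFFERS, min_availability=0.9).provider)   # ProviderA
```

With a lenient availability bar, the mid-range provider wins on price; raise the bar and the premium, high-availability provider becomes the only viable choice, while the cheapest provider is filtered out entirely.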
Since the A100 was the most popular GPU for most of 2023, we expect the same price and availability trends to carry over to H100s across clouds into 2024. Lambda will likely continue to offer the lowest prices, while we expect the other clouds to keep balancing cost-effectiveness with availability. The graph above shows a consistent trend line.
One thing to consider with these newer providers is their limited geographic footprint: if you need worldwide coverage, you're still best off with the hyperscalers, or with a platform like Shadeform that unifies these providers into a single interface. The quality of their data centers and network connectivity may also not match the larger providers. Interestingly, at this stage, that has not been the primary concern for customers; in this market's current cycle, chip availability reigns supreme.
8 x NVIDIA A100:
For 8 x NVIDIA A100s, meaning machines with 8 A100 GPUs attached to a single node, we see a similar trend forming, with the obvious exceptions being the hyperscalers (AWS, GCP, and Azure). Given the enterprise and internal demand within those clouds, we expect this to continue for quite a while with H100s as well.
While ChatGPT and Grok were initially trained on A100 clusters, H100s are becoming the most desirable chip for training, and increasingly for inference. We expect the same price and availability trends to continue across clouds for H100s into 2024, and we'll keep tracking the market and keeping you updated.
Lambda Labs 2024 Pricing Changes
Many have speculated that Lambda Labs offers the cheapest machines to build out their funnel and then upsell reserved instances. Without knowing Lambda Labs' internals, their on-demand offering is about 40-50% cheaper than expected based on our analysis. Not surprisingly, Lambda has finally raised their prices. This pricing change impacts all customers, even those with instances started in 2023. Below is a chart showing their most significant price increase to date.
Shadeform: A Unified Platform for the Future of High-Performance Computing
Shadeform customers use all of these clouds and more. We help them get the machines they need by scanning the on-demand market every second, grabbing instances as soon as they come online, and providing a single, easy-to-use console for all clouds. Sign up today here.