GPU Product
Rent NVIDIA V100 32GB GPUs
32 GB of HBM2 at 900 GB/s — the cost-leader for memory-bound AI fine-tuning, HPC, and batch inference. From $0.29/hour on sustainable, second-life Volta silicon.
Technical Specifications

V100 Rental Options
Rent NVIDIA V100 32GB GPUs from $0.29/hour. Hourly billing with no minimum commitment, plus 1-month and 3-month reservations for committed workloads. Capacity sourced from sustainable second-life datacenter hardware.
Comparison
V100 32GB vs L40
Both are datacenter GPUs in the CloudRift fleet, positioned for different workloads. The V100 wins on price (roughly half the hourly rate), HBM2 memory bandwidth, FP64 throughput, and NVLink for fast multi-GPU communication within a node. The L40 wins on raw FP32, modern Ada Tensor Cores, and total VRAM — choose it when you need maximum inference throughput per card.
| Metric | V100 32GB | L40 | % Diff |
|---|---|---|---|
| CloudRift Price | $0.29 / hr | $0.63 / hr | −54% |
| Architecture | Volta | Ada Lovelace | N/A |
| Memory Type | HBM2 ECC | GDDR6 ECC | N/A |
| VRAM | 32 GB | 48 GB | −33.3% |
| Bus Width | 4 096-bit | 384-bit | +966% |
| Memory Bandwidth | 900 GB/s | 864 GB/s | +4.2% |
| FP64 Performance | ~7.8 TFLOPS | ~1.4 TFLOPS | +457% |
| FP32 Performance | ~15.7 TFLOPS | ~90.5 TFLOPS | −82.7% |
| Tensor Performance | ~125 TFLOPS | ~362 TFLOPS | −65.5% |
| CUDA Cores | 5 120 | 18 176 | −71.8% |
| Tensor Cores | 640 (1st gen) | 568 (4th gen) | +12.7% |
| Multi-GPU Interconnect | NVLink 2.0 | PCIe 4.0 only | N/A |
| Form Factor | SXM3 | Dual-slot PCIe | N/A |
| TDP | 350 W | 300 W | +16.7% |
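The % Diff column is the V100's value relative to the L40's. A minimal sketch that recomputes it from the table's own numbers (metric names here are shorthand, not an API):

```python
# Recompute the "% Diff" column: V100 value relative to the L40,
# expressed as (v100 / l40 - 1) * 100.
specs = {
    # metric: (V100 32GB, L40)
    "price_usd_hr":   (0.29, 0.63),
    "vram_gb":        (32, 48),
    "bus_width_bits": (4096, 384),
    "bandwidth_gb_s": (900, 864),
    "fp64_tflops":    (7.8, 1.4),
    "fp32_tflops":    (15.7, 90.5),
    "tensor_tflops":  (125, 362),
    "cuda_cores":     (5120, 18176),
    "tensor_cores":   (640, 568),
    "tdp_w":          (350, 300),
}

def pct_diff(v100, l40):
    """Percentage difference of the V100 figure relative to the L40 figure."""
    return (v100 / l40 - 1) * 100

for metric, (v100, l40) in specs.items():
    print(f"{metric:16s} {pct_diff(v100, l40):+7.1f}%")
```

Running it reproduces every row, e.g. price comes out at −54.0% and bus width at +966.7%.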
Performance
Key performance metrics
HBM2 Memory Bandwidth
900 GB/s of HBM2 bandwidth on a 4 096-bit bus — the cost-leader for memory-bound workloads like scientific compute, mid-sized LLM fine-tuning, and large-batch inference.
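Why bandwidth matters for inference: during single-stream LLM decoding, every generated token streams the full set of weights from VRAM, so memory bandwidth caps throughput. A back-of-envelope sketch, where the model sizes are illustrative assumptions rather than benchmarks:

```python
# Back-of-envelope ceiling for memory-bound LLM decoding on a V100 32GB:
# each generated token reads every weight once from VRAM, so tokens/s
# is bounded above by bandwidth / model_bytes. Illustrative only.
BANDWIDTH_GB_S = 900  # V100 32GB HBM2 bandwidth
VRAM_GB = 32

def decode_ceiling_tok_s(params_billion, bytes_per_param=2):
    """Upper bound on single-stream decode speed (FP16 weights assumed)."""
    model_gb = params_billion * bytes_per_param
    assert model_gb <= VRAM_GB, "model must fit in VRAM"
    return BANDWIDTH_GB_S / model_gb

for b in (7, 13):
    print(f"{b}B FP16: <= {decode_ceiling_tok_s(b):.0f} tok/s")
```

A 13B-parameter model in FP16 occupies 26 GB, fitting in the 32 GB card with a theoretical decode ceiling of roughly 35 tokens/s; real throughput depends on batch size, KV cache, and kernel efficiency.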
Tensor Core Acceleration
640 first-generation Tensor Cores deliver up to ~125 TFLOPS of mixed-precision throughput — proven silicon for CUDA, cuDNN, and mature ML toolchains.
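Whether a kernel can exploit that Tensor Core peak depends on its arithmetic intensity. A quick roofline-style sketch, using only the peak figures quoted above, of the "ridge point" at which a V100 kernel stops being bandwidth-bound:

```python
# Roofline ridge point for the V100 32GB: the arithmetic intensity
# (FLOPs per byte moved) above which a kernel can become compute-bound
# rather than bandwidth-bound. Peak figures from the spec table.
BANDWIDTH_B_S = 900e9   # HBM2 bandwidth in bytes/s
FP32_FLOPS = 15.7e12    # peak FP32 throughput
TENSOR_FLOPS = 125e12   # peak mixed-precision Tensor Core throughput

ridge_fp32 = FP32_FLOPS / BANDWIDTH_B_S      # ~17 FLOP/byte
ridge_tensor = TENSOR_FLOPS / BANDWIDTH_B_S  # ~139 FLOP/byte
print(f"FP32 ridge:   {ridge_fp32:.1f} FLOP/byte")
print(f"Tensor ridge: {ridge_tensor:.1f} FLOP/byte")
```

Dense matrix multiplies in training and batched inference easily exceed ~139 FLOP/byte and so benefit from the Tensor Cores; low-intensity kernels stay bound by the 900 GB/s memory system instead.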
A Fraction of Hyperscaler Pricing
$0.29/hr on CloudRift vs ~$3.90/hr per GPU on AWS p3dn.24xlarge and ~$2.75/hr per GPU on Azure NDv2 — up to 13× cheaper than hyperscalers for the same V100 32GB silicon.
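The savings compound over a real job. A sketch using the per-GPU hourly rates quoted above; the 100 GPU-hour job size is an illustrative assumption:

```python
# Per-GPU hourly rates for V100 32GB capacity as quoted above.
RATES = {
    "CloudRift": 0.29,
    "AWS p3dn.24xlarge": 3.90,
    "Azure NDv2": 2.75,
}

hours = 100  # e.g. a 100 GPU-hour fine-tuning run (assumption)
base = RATES["CloudRift"]
for provider, rate in RATES.items():
    print(f"{provider:18s} ${rate * hours:8.2f}  ({rate / base:.1f}x CloudRift)")
```

At these rates the same 100 GPU-hour run costs $29 on CloudRift versus $390 on AWS (13.4×) and $275 on Azure (9.5×).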
NVIDIA V100 FAQ
Common Questions About the V100
Ready to get started?
Get in touch with our team to discuss your requirements and find the right solution for your infrastructure.