96 GB Blackwell Power for Enterprise-Scale AI
With 96 GB of GDDR7 and next-gen Blackwell AI engines, the RTX PRO 6000 lets you train and serve multi-billion-parameter LLMs on a single card.
Harness 96 GB of VRAM for your largest AI or rendering jobs without upfront hardware costs. Flexible hourly or reserved-term pricing available.
| Spec | RTX PRO 6000 | RTX 5090 | % Diff |
|---|---|---|---|
| Architecture | Blackwell | Blackwell | N/A |
| Process Tech | TSMC 4 nm | TSMC 4 nm | N/A |
| Transistors | 92.2 B | 92.2 B | 0% |
| Compute Units (SMs) | 188 | 170 | +10.6% |
| Shaders (CUDA) | 24 064 | 21 760 | +10.6% |
| Tensor Cores | 752 | 680 | +10.6% |
| RT Cores | 188 | 170 | +10.6% |
| ROPs | 216 | 192 | +12.5% |
| TMUs | 752 | 680 | +10.6% |
| Boost Clock | 2 617 MHz | 2 407 MHz | +8.7% |
| Memory Type | GDDR7 ECC | GDDR7 | N/A |
| VRAM | 96 GB | 32 GB | +200% |
| Bus Width | 512-bit | 512-bit | 0% |
| VRAM Speed | 28 Gbps | 28 Gbps | 0% |
| Bandwidth | 1 792 GB/s | 1 792 GB/s | 0% |
| TDP | 600 W | 575 W | +4.3% |
| PCIe | PCIe 5.0 ×16 | PCIe 5.0 ×16 | N/A |
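The bandwidth figures above follow directly from bus width and per-pin memory speed; a quick sanity check:

```python
def memory_bandwidth_gbps(bus_width_bits: int, pin_speed_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin speed (Gbps) / 8 bits per byte."""
    return bus_width_bits * pin_speed_gbps / 8

# Both cards pair a 512-bit bus with 28 Gbps GDDR7.
print(memory_bandwidth_gbps(512, 28))  # 1792.0 GB/s
```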
- 96 GB of GDDR7 with ECC lets you fit multi-billion-parameter LLMs and complex 3D scenes on a single card.
- More than 125 TFLOPS of FP32 shader compute, plus fifth-gen Tensor Cores (up to roughly 4 000 AI TOPS) and fourth-gen RT Cores.
- PCIe 5.0 ×16 doubles host-to-GPU bandwidth over PCIe 4.0 (~64 GB/s vs ~32 GB/s per direction), reducing data-transfer bottlenecks in large-scale workloads.
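To see what doubling the link speed buys in practice, here is a back-of-envelope transfer-time comparison. The ~32 GB/s and ~64 GB/s unidirectional figures are nominal PCIe ×16 approximations (real-world throughput is somewhat lower), and the 48 GB payload is an illustrative example:

```python
# Approximate unidirectional PCIe x16 throughput in GB/s (nominal, not measured).
PCIE4_X16_GBPS = 32.0
PCIE5_X16_GBPS = 64.0

def transfer_seconds(payload_gb: float, link_gbps: float) -> float:
    """Time to move a payload across the host-to-GPU link at the given throughput."""
    return payload_gb / link_gbps

payload = 48.0  # e.g., streaming a 48 GB shard of model weights to the card
print(f"PCIe 4.0: {transfer_seconds(payload, PCIE4_X16_GBPS):.2f} s")  # 1.50 s
print(f"PCIe 5.0: {transfer_seconds(payload, PCIE5_X16_GBPS):.2f} s")  # 0.75 s
```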
- Serve quantized 70–175 B-parameter language models, or fine-tune with parameter-efficient methods, without multi-GPU partitioning.
- Real-time ray tracing and neural rendering pipelines for AR/VR.
- Video upscaling, diffusion, and texture synthesis at scale.
- Accelerated molecular dynamics, CFD, and large-scale graph analytics.
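For the LLM use case, a rough rule of thumb is that resident weight memory equals parameter count times bytes per parameter (KV cache and activations add more on top). A hedged sketch of which configurations fit in 96 GB, under that simplification:

```python
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate GPU memory for model weights alone (using 1 GB = 1e9 bytes for simplicity)."""
    return params_billions * bytes_per_param

VRAM_GB = 96

for params, bits, label in [(70, 16, "70B @ FP16"), (70, 8, "70B @ INT8"), (175, 4, "175B @ 4-bit")]:
    need = weight_memory_gb(params, bits / 8)
    verdict = "fits" if need < VRAM_GB else "needs multiple GPUs"
    print(f"{label}: ~{need:.0f} GB of weights -> {verdict}")
```

The sketch shows why quantization matters at this scale: a 70 B model fits comfortably at 8-bit but not at FP16, and a 175 B model fits only at 4-bit, with little headroom left for KV cache.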
We're here to support your compute and AI needs. Let us know what you're looking to build. Businesses of any size are welcome.