Whisper Large V3
40 consumer GPUs can run Whisper Large V3 at Q4 natively. Precise VRAM thresholds and benchmarks below.
llama.cpp 0.2.x · CUDA 12 · ROCm 6 · Updated monthly · methodology →
Execution Context
This model requires an entry-level GPU (8 GB VRAM)
How to run this model
Check if your GPU can run Whisper Large V3 →
VRAM Calculator — instant compatibility check
RTX 5090
32 GB · Runs Q4 natively · Check availability
*Prices and availability may change. Some links are affiliate links.
System Requirements
VRAM by Quantization
| Quantization | VRAM needed | Disk space | Quality |
|---|---|---|---|
| FP16 (max quality) | 4 GB | 3.1 GB | Maximum |
| Q8 (high quality) | 2.5 GB | 1.6 GB | Near-lossless |
| Q4 (recommended) | 1.5 GB | 0.9 GB | Best balance |
| Q2 (minimum) | 1 GB | 0.5 GB | Quality loss |
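The VRAM figures above roughly follow parameter count × bits per weight, plus a working buffer for activations and decoder state. A minimal sketch of that estimate (the fixed overhead term is an assumption for illustration, not the site's actual formula):

```python
# Rough VRAM estimate: weight memory = params * bits / 8,
# plus a fixed working buffer (assumed ~0.7 GB here).
def estimate_vram_gb(params_b: float, bits: int, overhead_gb: float = 0.7) -> float:
    weights_gb = params_b * bits / 8  # params in billions -> GB of weights
    return weights_gb + overhead_gb

# Whisper Large V3 has 1.55B parameters.
for name, bits in [("FP16", 16), ("Q8", 8), ("Q4", 4), ("Q2", 2)]:
    print(f"{name}: ~{estimate_vram_gb(1.55, bits):.1f} GB")
```

At Q4 this gives roughly 0.8 GB of weights plus overhead, in line with the 1.5 GB figure in the table.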
Model Details
| Developer | OpenAI |
| Parameters | 1.55B |
| License | MIT |
| Use cases | Voice (speech recognition and translation) |
| Released | 2023-11 |
Hugging Face
openai/whisper-large-v3
Can your GPU run Whisper Large V3?
Whisper Large V3 requires 1.5 GB of VRAM at Q4. 40 consumer GPUs meet this threshold. Below 1.5 GB you'll hit significant offload latency.
Hardware Performance Matrix
40 Q4 native · 0 offload · 0 unsupported
| GPU | VRAM | Compatibility | Est. Speed | Action |
|---|---|---|---|---|
| RTX 5090 | 32GB | Optimal | 345 tok/s | Calculate → |
| RTX 4090 | 24GB | Optimal | 345 tok/s | Calculate → |
| M4 Ultra | 128GB | Optimal | 345 tok/s | Calculate → |
| RTX 5080 | 16GB | Optimal | 345 tok/s | Calculate → |
| M3 Ultra | 192GB | Optimal | 336 tok/s | Calculate → |
| RTX 4080 Super | 16GB | Optimal | 322 tok/s | Calculate → |
| RTX 5070 Ti | 16GB | Optimal | 342 tok/s | Calculate → |
| RTX 3090 | 24GB | Optimal | 344 tok/s | Calculate → |
| M4 Max 48GB | 48GB | Optimal | 240 tok/s | Calculate → |
| RX 7900 XTX | 24GB | Optimal | 345 tok/s | Calculate → |
| M4 Max 36GB | 36GB | Optimal | 240 tok/s | Calculate → |
| RTX 4070 Ti Super | 16GB | Optimal | 295 tok/s | Calculate → |
| RTX 3080 Ti | 12GB | Optimal | 340 tok/s | Calculate → |
| RX 7900 XT | 20GB | Optimal | 336 tok/s | Calculate → |
| RTX 5070 | 12GB | Optimal | 295 tok/s | Calculate → |
| RTX 3080 | 10GB | Optimal | 333 tok/s | Calculate → |
| M4 Pro | 24GB | Optimal | 119 tok/s | Calculate → |
| RX 7800 XT | 16GB | Optimal | 274 tok/s | Calculate → |
| RX 6800 XT | 16GB | Optimal | 225 tok/s | Calculate → |
| RTX 4070 | 12GB | Optimal | 221 tok/s | Calculate → |
| RTX 4060 Ti 16GB | 16GB | Optimal | 126 tok/s | Calculate → |
| RX 7700 XT | 12GB | Optimal | 190 tok/s | Calculate → |
| RTX 3070 Ti | 8GB | Optimal | 267 tok/s | Calculate → |
| RTX 4060 Ti | 8GB | Optimal | 126 tok/s | Calculate → |
| RTX 3070 | 8GB | Optimal | 196 tok/s | Calculate → |
| RX 6700 XT | 12GB | Optimal | 169 tok/s | Calculate → |
| M3 Pro | 18GB | Optimal | 66 tok/s | Calculate → |
| RTX 3060 Ti | 8GB | Optimal | 197 tok/s | Calculate → |
| RTX 2080 Ti | 11GB | Optimal | 197 tok/s | Calculate → |
| RTX 3060 | 12GB | Optimal | 158 tok/s | Calculate → |
| M2 Pro | 16GB | Optimal | 88 tok/s | Calculate → |
| RTX 4060 | 8GB | Optimal | 119 tok/s | Calculate → |
| Arc A770 16GB | 16GB | Optimal | 98 tok/s | Calculate → |
| M1 Pro | 16GB | Optimal | 88 tok/s | Calculate → |
| RX 7600 | 8GB | Optimal | 127 tok/s | Calculate → |
| RX 6600 XT | 8GB | Optimal | 119 tok/s | Calculate → |
| Arc A750 8GB | 8GB | Optimal | 89 tok/s | Calculate → |
| RX 6600 | 8GB | Optimal | 108 tok/s | Calculate → |
| RTX 3050 8GB | 8GB | Optimal | 98 tok/s | Calculate → |
| GTX 1660 Super | 6GB | Optimal | 147 tok/s | Calculate → |
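Given rows like the matrix above, choosing a card reduces to filtering by the Q4 threshold and sorting by throughput. A sketch using a handful of rows copied from the table (the speed figures are the site's estimates, not measurements of ours):

```python
# (name, vram_gb, est_tok_s) -- a few rows from the matrix above.
GPUS = [
    ("RTX 4090", 24, 345),
    ("RTX 3060", 12, 158),
    ("GTX 1660 Super", 6, 147),
    ("RTX 3050 8GB", 8, 98),
]

Q4_VRAM_GB = 1.5  # Whisper Large V3 requirement at Q4

# Keep cards that fit Q4 natively, then rank by estimated speed.
native = sorted(
    (g for g in GPUS if g[1] >= Q4_VRAM_GB),
    key=lambda g: g[2],
    reverse=True,
)
for name, vram, speed in native:
    print(f"{name}: {vram} GB, ~{speed} tok/s")
```

Because the Q4 footprint is so small, every card in this sample qualifies; the ranking is effectively a speed sort.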
Recommended GPUs for Whisper Large V3
Best picks by compatibility, VRAM headroom, and value — prices and availability may change.
RTX 5090
32 GB VRAM
Check availability →
RTX 4090
24 GB VRAM
Check availability →
M4 Ultra
128 GB VRAM
Check availability →
Some links are Amazon affiliate links. We may earn a commission at no extra cost to you. Amazon cookies may last up to 24 hours after your click.
Whisper Large V3 — Compatibility guide
A lightweight model like Whisper Large V3 runs well on consumer hardware with 6 GB of VRAM or more. Ideal for daily use with Ollama or LM Studio. Use the VRAM calculator to check your setup.
Compare GPUs for Whisper Large V3
Which GPU is worth it? Real specs and benchmarks side by side.
Compatible Hardware
GPUs that run Whisper Large V3 at Q4 — sorted by AI performance score.
More Practical Alternatives
Similar models in the voice category with comparable VRAM footprints.
Not sure which GPU you need for Whisper Large V3?
The VRAM Calculator tells you exactly which quantization your hardware can handle.