Llama 3.1 405B
Llama 3.1 405B does not fit on any single consumer GPU, but 2 of the GPUs we test can run it with CPU offloading. Precise VRAM thresholds and benchmarks below.
llama.cpp 0.2.x · CUDA 12 · ROCm 6 · Updated monthly · methodology →
Execution Context
This model exceeds any single flagship GPU (48 GB+ VRAM); it requires multi-GPU setups or CPU offloading
How to run this model
Check if your GPU can run Llama 3.1 405B →
VRAM Calculator — instant compatibility check
System Requirements
VRAM by Quantization
| Quantization | VRAM needed | Disk space | Quality |
|---|---|---|---|
| FP16 (max quality) | 810 GB | 810 GB | Maximum |
| Q8 (high quality) | 405 GB | 405 GB | Near-lossless |
| Q4 (recommended) | 230 GB | 230 GB | Best balance |
| Q2 (minimum) | 115 GB | 115 GB | Quality loss |
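The figures above follow roughly parameters × bytes per weight. A minimal sketch of that rule of thumb (the fractional bits-per-weight values for Q4 and Q2 are assumptions chosen to approximate the table, not measured quantization sizes):

```python
# Rough VRAM estimate: parameters x bytes per weight.
# Bits-per-weight values are illustrative assumptions that
# approximate the table above; real GGUF quants vary slightly.
BITS_PER_WEIGHT = {"FP16": 16, "Q8": 8, "Q4": 4.5, "Q2": 2.3}

def estimate_vram_gb(params_b: float, quant: str) -> float:
    """Return an approximate weight-only VRAM requirement in GB."""
    bytes_per_param = BITS_PER_WEIGHT[quant] / 8
    return params_b * bytes_per_param

for quant in BITS_PER_WEIGHT:
    print(f"{quant}: ~{estimate_vram_gb(405, quant):.0f} GB")
```

Actual usage is somewhat higher once activations, the KV cache, and runtime buffers are added.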
Model Details
| Spec | Value |
|---|---|
| Developer | Meta |
| Parameters | 405B |
| Context window | 131,072 tokens |
| License | llama-3.1-community |
| Use cases | chat, coding, reasoning, analysis, research |
| Released | 2024-07 |
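That 131,072-token context window costs memory on top of the weights: the KV cache grows linearly with context length. A sketch of the estimate, where the architecture values (layer count, grouped-query KV heads, head dimension) are assumptions based on the published Llama 3.1 405B specs and the result should be read as an order-of-magnitude figure:

```python
# KV-cache memory grows linearly with context length, in addition
# to the weight VRAM. Architecture values are assumptions from the
# published Llama 3.1 405B specs.
LAYERS = 126
KV_HEADS = 8        # grouped-query attention
HEAD_DIM = 128
BYTES_FP16 = 2

def kv_cache_gb(context_tokens: int) -> float:
    """Approximate FP16 KV-cache size in GB for a given context length."""
    per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_FP16  # K and V
    return context_tokens * per_token / 1e9

print(f"{kv_cache_gb(131_072):.1f} GB at the full 131,072-token context")
```

In practice, llama.cpp can quantize the KV cache to reduce this further.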
Install with Ollama
```
ollama run llama3.1:405b
```

Hugging Face

meta-llama/Llama-3.1-405B-Instruct

Can your GPU run Llama 3.1 405B?
Llama 3.1 405B needs 230 GB of VRAM at Q4; no consumer GPU can hold it fully. Of the GPUs tested, 2 can run it with CPU offloading at Q2 (115 GB).
Hardware Performance Matrix
0 GPUs run Q4 natively · 2 work with CPU offloading · 38 unsupported
Recommended GPUs for Llama 3.1 405B
Best picks by compatibility, VRAM headroom, and value — prices and availability may change.
M4 Ultra
128 GB VRAM
Check availability →
M3 Ultra
192 GB VRAM
Check availability →
RTX 5090
32 GB VRAM
Check availability →
Some links are Amazon affiliate links. We may earn a commission at no extra cost to you. Amazon cookies may last up to 24 hours after your click.
Llama 3.1 405B — Compatibility guide
At 405B parameters, Llama 3.1 405B only runs fully in multi-GPU or server configurations. Consider distilled or smaller variants if available. The VRAM Calculator can help you find compatible alternatives.
Compatible Hardware
GPUs that run Llama 3.1 405B at Q4 — sorted by AI performance score.
No consumer GPUs have enough VRAM for this model.
Consider distilled versions or Q2 quantization.
More Practical Alternatives
Similar models in the chat category with comparable VRAM footprints.
Not sure which GPU you need for Llama 3.1 405B?
The VRAM Calculator tells you exactly which quantization your hardware can handle.