RTX 5090
Pros
- Runs Magistral Small 24B at Q4 natively
- 32 GB VRAM, ample headroom
21 consumer GPUs can run Magistral Small 24B at Q4 natively. Precise VRAM thresholds and benchmarks below.
Prices and availability may change · affiliate link
llama.cpp 0.2.x · CUDA 12 · ROCm 6 · updated monthly · methodology →
This model requires a mid-range GPU (16 GB VRAM).
Best picks by compatibility, VRAM headroom, and value — prices and availability may change.
Some links are Amazon affiliate links. We may receive a commission at no extra cost to you. The Amazon cookie can last up to 24 hours after the click.
CPU vs GPU for Magistral Small 24B →
VRAM Calculator — instant compatibility check
RTX 5090
32 GB · Runs Q4 natively · Check availability
| Quantization | VRAM needed | Disk space | Quality |
|---|---|---|---|
| FP16 (max quality) | 52.8 GB | 48 GB | Maximum |
| Q8 (high quality) | 26.4 GB | 24 GB | Near-lossless |
| Q4 (recommended) | 13.2 GB | 12 GB | Best balance |
| Q2 (minimum) | 6.6 GB | 6 GB | Quality loss |
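The table's figures follow a simple rule of thumb: weight bits per parameter times parameter count, plus roughly 10% runtime overhead. A minimal sketch that reproduces the numbers above (the 1.1 overhead factor is an assumption inferred from the table, not a published spec):

```python
def vram_gb(params_b: float, bits: int, overhead: float = 1.1) -> float:
    """Estimate VRAM for model weights: params * (bits / 8) bytes,
    scaled by a runtime-overhead factor (assumed ~10%)."""
    return round(params_b * bits / 8 * overhead, 1)

# Reproduce the quantization table for a 24B-parameter model
for name, bits in [("FP16", 16), ("Q8", 8), ("Q4", 4), ("Q2", 2)]:
    print(f"{name}: {vram_gb(24, bits)} GB")
```

Real loaders add context-dependent buffers on top of this, so treat it as a floor, not an exact figure.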
| Spec | Value |
|---|---|
| Developer | Mistral AI |
| Parameters | 24B |
| Context window | 128,000 tokens |
| License | Apache 2.0 |
| Use cases | Chat, reasoning, analysis |
| Released | 2025-06 |
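That 128,000-token context window costs VRAM beyond the weights: the KV cache grows linearly with the number of tokens in context. A rough sketch, assuming architecture values commonly reported for this model family (40 layers, 8 KV heads, head dim 128, FP16 cache); these are assumptions, so check the model's actual config before relying on the numbers:

```python
def kv_cache_gb(tokens: int, layers: int = 40, kv_heads: int = 8,
                head_dim: int = 128, bytes_per: int = 2) -> float:
    """KV cache size: K and V each store kv_heads * head_dim values
    per layer per token, at bytes_per bytes each."""
    return 2 * layers * kv_heads * head_dim * bytes_per * tokens / 1024**3

print(f"{kv_cache_gb(8192):.2f} GB at 8k tokens")
print(f"{kv_cache_gb(131072):.2f} GB at the full 128k window")
```

The takeaway: a card that fits the Q4 weights comfortably can still run out of memory at long contexts, which is why runtimes offer quantized KV caches and context-length limits.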
Install with Ollama:

```shell
ollama run magistral:24b
```

Hugging Face: mistralai/Magistral-Small-2506

Magistral Small 24B requires **13.2 GB VRAM** at Q4. 21 consumer GPUs meet this threshold; below it you'll hit significant offload latency.
21 GPUs run Q4 natively · 18 more with offload
RTX 5090
32 GB VRAM
Check availability →
RTX 4090
24 GB VRAM
Check availability →
M4 Ultra
128 GB unified memory
Check availability →
Magistral Small 24B can run CPU-only, with no dedicated GPU. On an i7-13700K with llama.cpp at Q4 it reaches about 5 tok/s, slow but usable for a 24B model. A GPU delivers roughly 4–6× that speed; check the VRAM calculator for specifics.
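Between CPU-only and full GPU there is partial offload: llama.cpp's `--n-gpu-layers` (`-ngl`) flag moves only as many layers to the GPU as fit. A sketch for estimating that number, assuming 40 layers and the 13.2 GB Q4 footprint from the table; the 1.5 GB reserve for KV cache and runtime buffers is an assumption, not a measured value:

```python
def n_gpu_layers(vram_gb: float, model_gb: float = 13.2, layers: int = 40,
                 reserve_gb: float = 1.5) -> int:
    """How many layers fit in VRAM, leaving headroom (assumed
    reserve_gb) for the KV cache and runtime buffers."""
    per_layer = model_gb / layers  # ~0.33 GB per layer at Q4
    fit = int((vram_gb - reserve_gb) / per_layer)
    return max(0, min(layers, fit))

# Pass the result to llama.cpp as: -ngl <n>
print(n_gpu_layers(8))    # 8 GB card: partial offload
print(n_gpu_layers(16))   # 16 GB card: every layer fits
```

Each layer kept on the GPU skips a PCIe round trip, so throughput scales roughly with the offloaded fraction; start from this estimate and back off if the runtime reports out-of-memory.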
Which GPU is worth it? Real specs and benchmarks side by side.
GPUs that run Magistral Small 24B at Q4 — sorted by AI performance score.
Similar models in the chat category with comparable VRAM footprints.
The VRAM Calculator tells you exactly which quantization your hardware can handle.