Qwen3 1.7B
40 consumer GPUs can run Qwen3 1.7B at Q4 natively. Precise VRAM thresholds and benchmarks below.
llama.cpp 0.2.x · CUDA 12 · ROCm 6 · Updated monthly · methodology →
Execution Context
This model requires an Entry-tier GPU (8 GB VRAM)
How to run this model
CPU vs GPU for Qwen3 1.7B →
VRAM Calculator — instant compatibility check
RTX 5090
32 GB · Runs Q4 natively · Check availability
*Prices and availability may change. Some links are affiliate links.
System Requirements
VRAM by Quantization
| Quantization | VRAM needed | Disk space | Quality |
|---|---|---|---|
| FP16 (max quality) | 3.7 GB | 3.4 GB | Maximum |
| Q8 (high quality) | 1.9 GB | 1.7 GB | Near-lossless |
| Q4 (recommended) | 0.9 GB | 0.9 GB | Best balance |
| Q2 (minimum) | 0.5 GB | 0.4 GB | Quality loss |
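The figures above follow approximately from parameter count times bytes per weight, plus overhead for activations and runtime buffers. A rough sketch; the effective bits-per-weight values and the 10% overhead factor are our assumptions, so the results only approximate the table:

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float, overhead: float = 1.1) -> float:
    """Rough VRAM estimate: weight footprint scaled by an assumed
    10% overhead for activations and runtime buffers."""
    weight_gb = params_b * bits_per_weight / 8  # billions of params -> GB of weights
    return round(weight_gb * overhead, 1)

# Qwen3 1.7B; effective bits/weight for GGUF quants are approximate
for name, bits in [("FP16", 16), ("Q8", 8.5), ("Q4", 4.5), ("Q2", 2.6)]:
    print(f"{name}: ~{estimate_vram_gb(1.7, bits)} GB")
```

Disk space is simply the weight footprint without the runtime overhead, which is why it comes in slightly under the VRAM column.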
Model Details
| Detail | Value |
|---|---|
| Developer | Alibaba |
| Parameters | 1.7B |
| Context window | 131,072 tokens |
| License | Apache 2.0 |
| Use cases | chat, reasoning |
| Released | 2025-04 |
Install with Ollama
ollama run qwen3:1.7b

Hugging Face: Qwen/Qwen3-1.7B

Can your GPU run Qwen3 1.7B?
Qwen3 1.7B requires 0.9 GB of VRAM at Q4, and 40 consumer GPUs meet that threshold. With less than 0.9 GB of free VRAM, weights spill to system RAM and you'll hit significant offload latency.
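Once pulled with `ollama run qwen3:1.7b`, the model is also reachable through Ollama's local REST API (default port 11434). A minimal standard-library sketch; the helper names are ours, and a running `ollama serve` is assumed:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(prompt: str, model: str = "qwen3:1.7b") -> dict:
    # stream=False asks for one JSON object instead of newline-delimited chunks
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the completion text."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

`generate("Why is the sky blue?")` blocks until the full completion arrives; for token-by-token output, drop `"stream": False` and read the response line by line.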
Hardware Performance Matrix
40 Q4 native · 0 offload · 0 unsupported
| GPU | VRAM | Compatibility | Est. Speed | Action |
|---|---|---|---|---|
| RTX 5090 | 32GB | Optimal | 330 tok/s | Calculate → |
| RTX 4090 | 24GB | Optimal | 330 tok/s | Calculate → |
| M4 Ultra | 128GB | Optimal | 330 tok/s | Calculate → |
| RTX 5080 | 16GB | Optimal | 330 tok/s | Calculate → |
| M3 Ultra | 192GB | Optimal | 319 tok/s | Calculate → |
| RTX 4080 Super | 16GB | Optimal | 305 tok/s | Calculate → |
| RTX 5070 Ti | 16GB | Optimal | 326 tok/s | Calculate → |
| RTX 3090 | 24GB | Optimal | 329 tok/s | Calculate → |
| M4 Max 48GB | 48GB | Optimal | 227 tok/s | Calculate → |
| RX 7900 XTX | 24GB | Optimal | 330 tok/s | Calculate → |
| M4 Max 36GB | 36GB | Optimal | 227 tok/s | Calculate → |
| RTX 4070 Ti Super | 16GB | Optimal | 279 tok/s | Calculate → |
| RTX 3080 Ti | 12GB | Optimal | 323 tok/s | Calculate → |
| RX 7900 XT | 20GB | Optimal | 319 tok/s | Calculate → |
| RTX 5070 | 12GB | Optimal | 279 tok/s | Calculate → |
| RTX 3080 | 10GB | Optimal | 315 tok/s | Calculate → |
| M4 Pro | 24GB | Optimal | 113 tok/s | Calculate → |
| RX 7800 XT | 16GB | Optimal | 259 tok/s | Calculate → |
| RX 6800 XT | 16GB | Optimal | 213 tok/s | Calculate → |
| RTX 4070 | 12GB | Optimal | 209 tok/s | Calculate → |
| RTX 4060 Ti 16GB | 16GB | Optimal | 119 tok/s | Calculate → |
| RX 7700 XT | 12GB | Optimal | 179 tok/s | Calculate → |
| RTX 3070 Ti | 8GB | Optimal | 253 tok/s | Calculate → |
| RTX 4060 Ti | 8GB | Optimal | 119 tok/s | Calculate → |
| RTX 3070 | 8GB | Optimal | 186 tok/s | Calculate → |
| RX 6700 XT | 12GB | Optimal | 160 tok/s | Calculate → |
| M3 Pro | 18GB | Optimal | 63 tok/s | Calculate → |
| RTX 3060 Ti | 8GB | Optimal | 186 tok/s | Calculate → |
| RTX 2080 Ti | 11GB | Optimal | 186 tok/s | Calculate → |
| RTX 3060 | 12GB | Optimal | 149 tok/s | Calculate → |
| M2 Pro | 16GB | Optimal | 83 tok/s | Calculate → |
| RTX 4060 | 8GB | Optimal | 113 tok/s | Calculate → |
| Arc A770 16GB | 16GB | Optimal | 93 tok/s | Calculate → |
| M1 Pro | 16GB | Optimal | 83 tok/s | Calculate → |
| RX 7600 | 8GB | Optimal | 120 tok/s | Calculate → |
| RX 6600 XT | 8GB | Optimal | 113 tok/s | Calculate → |
| Arc A750 8GB | 8GB | Optimal | 85 tok/s | Calculate → |
| RX 6600 | 8GB | Optimal | 102 tok/s | Calculate → |
| RTX 3050 8GB | 8GB | Optimal | 93 tok/s | Calculate → |
| GTX 1660 Super | 6GB | Optimal | 139 tok/s | Calculate → |
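The native/offload split in the matrix reduces to a threshold check on VRAM: do the Q4 weights plus some working headroom fit on the card? A sketch; the 0.5 GB headroom figure is an assumption, not this site's exact formula:

```python
def classify(vram_gb: float, model_q4_gb: float = 0.9, headroom_gb: float = 0.5) -> str:
    """'native' if the Q4 weights plus headroom for context and buffers fit
    entirely in VRAM; otherwise weights spill to system RAM ('offload')."""
    return "native" if vram_gb >= model_q4_gb + headroom_gb else "offload"

gpus = {"GTX 1660 Super": 6, "RTX 3050 8GB": 8, "RTX 5090": 32}
results = {name: classify(vram) for name, vram in gpus.items()}  # all 'native' here
```

Because the Q4 footprint is under 1 GB, every card in the matrix clears the bar, which is why the counts read "40 Q4 native · 0 offload".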
Recommended GPUs for Qwen3 1.7B
Best picks by compatibility, VRAM headroom, and value — prices and availability may change.
RTX 5090
32 GB VRAM
Check availability →
RTX 4090
24 GB VRAM
Check availability →
M4 Ultra
128 GB VRAM
Check availability →
Some links are Amazon affiliate links. We may earn a commission at no extra cost to you. Amazon cookies may last up to 24 hours after your click.
Qwen3 1.7B — Compatibility guide
Qwen3 1.7B is an edge model that runs well on CPU alone; no GPU is required. On an i7-13700K running llama.cpp at Q4 it reaches about 35 tok/s, enough for real-time chat. Even a 6 GB GPU pushes that to roughly 126 tok/s, and high-end cards exceed 300 tok/s (see the matrix above). Ideal for laptops and desktops without a dedicated graphics card.
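That CPU figure is consistent with the memory-bandwidth rule of thumb for single-stream decoding: each generated token reads every weight once, so tok/s is bounded by bandwidth divided by model size. A back-of-envelope sketch; the bandwidth figures are rough assumptions:

```python
def decode_speed_cap(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on decode tok/s: each token streams the full weight set once."""
    return bandwidth_gb_s / model_size_gb

q4_size = 0.9  # Qwen3 1.7B at Q4, from the table above
cpu_cap = decode_speed_cap(35, q4_size)   # ~35 GB/s: dual-channel desktop DDR4/DDR5
gpu_cap = decode_speed_cap(336, q4_size)  # ~336 GB/s: e.g. GTX 1660 Super GDDR6
```

Measured CPU throughput sits close to its cap because decoding a small model is almost purely bandwidth-bound; GPUs tend to land well below theirs due to kernel-launch and framework overhead on tiny models.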
Compare GPUs for Qwen3 1.7B
Which GPU is worth it? Real specs and benchmarks side by side.
Compatible Hardware
GPUs that run Qwen3 1.7B at Q4 — sorted by AI performance score.
More Practical Alternatives
Similar models in the chat category with comparable VRAM footprints.
Not sure which GPU you need for Qwen3 1.7B?
The VRAM Calculator tells you exactly which quantization your hardware can handle.