RTX 3080 Ti
60 AI models fit in 12 GB VRAM at Q4 native. 21 more run with CPU offloading. Real benchmarks below.
llama.cpp 0.2.x · CUDA 12 · Ubuntu 22.04 · Prices verified on Amazon · methodology →
Execution Context
See current offer
Amazon affiliate link for RTX 3080 Ti
Some links are Amazon affiliate links. We may earn a commission at no extra cost to you. The Amazon cookie can last up to 24 hours after the click.
Full Specifications
NVIDIA · 2021-06
| Spec | Value |
|---|---|
| VRAM | 12 GB GDDR6X |
| Bandwidth | 912 GB/s |
| FP16 TFLOPS | 34.1 |
| AI Score | 62 / 140 |
| CUDA Cores | 10,240 |
| Tensor Cores | 320 |
| TDP | 350 W |
| PCIe | Gen 4 |
| Slots | 3 |
| Power Connector | 2x 8-pin |
| Price Band | High-end |
| Released | 2021-06 |
AI Benchmarks
Real inference measurements — llama.cpp Q4_K_M
| Task | Result |
|---|---|
| Llama 1B Q4 | 400 tok/s |
| Llama 3B Q4 | 181 tok/s |
| Llama 7B Q4 | 60 tok/s |
| Llama 13B Q4 | 33 tok/s |
| Llama 30B Q4 | Does not fit in 12 GB VRAM |
| Llama 70B Q4 | Offload or multi-GPU |
| Stable Diffusion 512px | 4.2s / img |
| Whisper Large RTF | 0.5x |
RTF < 1.0 = faster than real time. For Stable Diffusion and Whisper lower is better; for tokens/s higher is better.
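The tokens/s figures above come from llama.cpp with Q4_K_M quantization. As a rough way to reproduce a similar measurement on your own card, here is a minimal sketch using the llama-cpp-python bindings, assuming a CUDA-enabled build and any Q4_K_M GGUF that fits in 12 GB; the model path and prompt are placeholders, not the exact benchmark setup.

```python
# Rough tokens/s check with llama-cpp-python (pip install llama-cpp-python,
# built with CUDA support). Model path and prompt are placeholders.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b.Q4_K_M.gguf",  # any Q4_K_M GGUF that fits in 12 GB
    n_gpu_layers=-1,                      # -1 = offload every layer to the GPU
    n_ctx=2048,
    verbose=False,
)

prompt = "Explain what VRAM is in one paragraph."
start = time.time()
out = llm(prompt, max_tokens=256)
elapsed = time.time() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")
```

A single short run like this is only indicative; context length, batch size, and driver version all shift the result.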
Compare RTX 3080 Ti with another GPU
Is an upgrade worth it? Compare specs and real benchmarks side by side.
Open comparator →
Compatible AI Models — RTX 3080 Ti
60 models run fully in VRAM · 21 with CPU offloading
Flux.1 Dev
Whisper Large V3
Stable Diffusion 3.5 Large
Stable Diffusion 3.5 Medium
Phi-4
Stable Diffusion 3 Medium
Flux.1 Schnell
DeepSeek R1 Distill 14B
Show all 60 compatible models →
Also runs with CPU offloading (21)
- FLUX.2 Dev 8.8 GB Q2
- Qwen2.5-Coder 32B 9.6 GB Q2 How to install →
- DeepSeek R1 Distill 32B 9.6 GB Q2 How to install →
- Qwen2.5 32B 9.6 GB Q2 How to install →
- Gemma 4 27B 7.4 GB Q2 How to install →
- Qwen3.5 35B-A3B 9.6 GB Q2 How to install →
- Gemma 2 27B 8 GB Q2 How to install →
- Gemma 3 27B 8.1 GB Q2 How to install →
- Gemma 4 31B 8.5 GB Q2 How to install →
- Mistral Small 3 7.2 GB Q2 How to install →
- Qwen3 32B 8.8 GB Q2 How to install →
- Qwen3-Coder 30B-A3B 8.3 GB Q2 How to install →
- Qwen3 30B-A3B 8.3 GB Q2 How to install →
- Devstral Small 2 24B 6.6 GB Q2 How to install →
- Qwen3.5 27B 7.4 GB Q2 How to install →
- Magistral Small 24B 6.6 GB Q2 How to install →
- CodeLlama 34B 10 GB Q2 How to install →
- Yi 1.5 34B 10 GB Q2 How to install →
- Mistral Small 3.2 6.6 GB Q2 How to install →
- Mistral Small 3.1 6.6 GB Q2 How to install →
- Phi-3.5 MoE 11 GB Q2 How to install →
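For the models in this list, only part of the network fits on the card; the remaining layers run on the CPU from system RAM. As an illustrative sketch of how that looks with llama-cpp-python (the filename and layer count are assumptions; in practice you raise n_gpu_layers until VRAM is nearly full):

```python
# Partial GPU offload sketch: keep only some layers of a large Q2 model in
# the 12 GB of VRAM and run the rest on the CPU from system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-coder-32b-instruct.Q2_K.gguf",  # hypothetical local file
    n_gpu_layers=40,   # assumed value; tune until you approach the 12 GB limit
    n_ctx=4096,
    verbose=False,
)

out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```

Expect throughput to drop sharply compared with models that fit entirely in VRAM, because the offloaded layers run at system-RAM speed.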
RTX 3080 Ti · Amazon
GPU prices change frequently across retailers. Check the current offer before buying.
See current offer
RTX 3080 Ti for Local AI
The RTX 3080 Ti with 12 GB of GDDR6X is a great card for getting started with local AI. It covers all the popular 7B models at Q4 (Llama 3.1 8B, Mistral 7B, Qwen2.5 7B, DeepSeek R1 Distill 8B) at decent speeds. For light use cases such as chat, coding assistance, and audio transcription, this GPU has everything you need.
In real benchmarks, the RTX 3080 Ti generates 60 tokens/second with Llama 7B Q4, enough for real-time conversation. Whisper runs smoothly for voice transcription. For image generation, Stable Diffusion 3 Medium and SD 3.5 Medium are supported.
If you are looking for your first local-AI GPU on a tight budget, the RTX 3080 Ti is a solid entry point. Check our guide to getting started with local AI and use the VRAM calculator to verify compatibility with your favorite model.
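If you just want a quick sanity check before opening the VRAM calculator, a back-of-the-envelope estimate works by hand. The bits-per-weight figures and the overhead factor below are rough assumptions, not the calculator's exact formula.

```python
# Rough VRAM estimate: model weights at a given quantization plus a fudge
# factor for KV cache and runtime buffers. All constants are approximations.
BITS_PER_WEIGHT = {"Q2_K": 2.6, "Q4_K_M": 4.8, "Q8_0": 8.5, "FP16": 16.0}

def estimate_vram_gb(params_billions: float, quant: str, overhead: float = 1.2) -> float:
    weights_gb = params_billions * BITS_PER_WEIGHT[quant] / 8  # weights alone
    return weights_gb * overhead                               # + cache and buffers

for size in (7, 13, 30):
    print(f"{size}B at Q4_K_M ~ {estimate_vram_gb(size, 'Q4_K_M'):.1f} GB")
```

Under these assumptions, 7B and 13B models at Q4 fit comfortably in 12 GB while 30B does not, which matches the benchmark table above.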
Plan your full AI build
RTX 3080 Ti · 12 GB VRAM — configure PSU, RAM, storage and check compatible models.
Related articles
Not sure which model to run on your RTX 3080 Ti?
The VRAM calculator tells you exactly which quantization you need.
Get the best price for RTX 3080 Ti
Open Amazon with our affiliate link and check availability, variants, and current deals.