What Do I Need For...
Hardware guides by use case. Direct answers with recommended setup, real benchmarks, and purchase links.
Run Llama locally
GPU with at least 8 GB of VRAM. RTX 3060 12GB hits 30 tok/s on Llama 8B Q4.
See recommended setups →
Generate images with AI
GPU with 8 GB of VRAM for SDXL, 12 GB for Flux.1. ComfyUI or Automatic1111.
See recommended setups →
Code with local AI
8 GB GPU, Qwen 2.5 Coder or DeepSeek Coder, VS Code with Continue.
See recommended setups →
Run AI offline
Any GPU running Ollama works without a connection; even CPU-only handles small models.
See recommended setups →
Build an affordable AI workstation
Complete builds by budget. RTX 3060 or 3090, 32–64 GB RAM, fast NVMe.
See recommended builds →
Prices and availability may change. Affiliate links.
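Several of the setups above use Ollama as the local runtime. A minimal sketch of getting started, assuming a Linux/macOS machine and using the `llama3.2:3b` tag purely as an example of a small model:

```shell
# Install Ollama (Linux/macOS install script; Windows has a separate installer)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a small quantized model once while online...
ollama pull llama3.2:3b

# ...then chat with it locally, no connection required
ollama run llama3.2:3b "Explain VRAM in one sentence."
```

Once the model is pulled, `ollama run` works entirely offline, which is what the offline card above relies on.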
RTX 4060
7B LLMs, 30+ tok/s chat, basic image generation
RTX 4060 Ti 16GB
13B–34B LLMs, SDXL, basic fine-tuning
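Before picking a card above, it helps to know how much VRAM your current GPU has. A quick check on NVIDIA hardware, assuming the driver's bundled `nvidia-smi` tool is on your PATH:

```shell
# Report GPU name plus total and used VRAM (in MiB) for each card
nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv
```

Compare the `memory.total` column against the VRAM figures in the guides: 8 GB covers 7B LLMs and SDXL, while 12–16 GB opens up 13B+ models and Flux.1.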
Can't find your use case? Use the wizard for a personalized recommendation.
Use the hardware wizard →