Hardware Tools v2.4

AI Hardware Tools

Free tools to plan, compare, and optimize your AI hardware setup. No sign-up, no downloads — just precise VRAM math and real benchmarks.

8 free tools · 40 GPUs indexed · 94 models · 0 sign-up required

Live catalog snapshot · releases through Apr 2026


Tool outputs are grounded in live model and GPU records, so compatibility checks reflect current hardware data rather than inflated catalog-size claims.

Javier Morales, Hardware and Local AI Specialist — 8 years of experience
GitHub: github.com/javier-morales-ia
With 8 GB of VRAM you can run any 7B model at Q4; 16 GB unlocks 14B at Q4 or 7B at Q8; 24 GB opens the door to 30B models.

These are the three real VRAM thresholds for local AI in 2026. Below 8 GB, only small models like Phi-3 Mini or Gemma 2B fit. Above 24 GB, you can run Llama 3.1 70B with partial offloading.
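As a quick sketch, the thresholds above can be expressed as a simple lookup. The tier labels simply mirror the text; this is an illustration, not the site's actual tooling code:

```python
def vram_tier(vram_gb: float) -> str:
    """Map available VRAM to the largest model class it comfortably runs,
    following the 8 / 16 / 24 GB thresholds described above (sketch)."""
    if vram_gb >= 24:
        return "30B at Q4 (70B possible with partial offload)"
    if vram_gb >= 16:
        return "14B at Q4 or 7B at Q8"
    if vram_gb >= 8:
        return "7B at Q4"
    return "small models only (e.g. Phi-3 Mini, Gemma 2B)"

print(vram_tier(12))  # prints "7B at Q4"
```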

— RunAIatHome Hardware Tools — validated VRAM thresholds

Why RunAIatHome ships local-AI-specific tooling

Running AI models on your own hardware forces technical decisions that simply don't exist in cloud AI. How much VRAM does the model you want actually need? Can your GPU load it fully, or will it offload to system RAM? Is it worth investing in a more expensive GPU if your current one already works? When does hardware pay back against API spend?

These calculators and interactive tools are built to answer those questions with real data, not vague estimates. From the VRAM calculator — which computes exact memory consumption for any model and quantization — to the budget planner — which surfaces the optimal build for every investment tier — every tool is designed for the enthusiast who wants to make informed technical decisions.
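The core arithmetic behind a VRAM fit check is weights plus runtime overhead. A minimal sketch, assuming common GGUF-style bytes-per-weight approximations and a flat overhead allowance for KV cache and activations (both values are assumptions, not the calculator's exact formula):

```python
# Approximate bytes stored per weight at each quantization level.
BYTES_PER_WEIGHT = {"Q4": 0.5, "Q8": 1.0, "FP16": 2.0}

def estimate_vram_gb(params_billion: float, quant: str,
                     overhead_gb: float = 1.5) -> float:
    """Rough VRAM need: weight memory plus a fixed allowance for
    KV cache and activations (sketch, not an exact model)."""
    weights_gb = params_billion * BYTES_PER_WEIGHT[quant]
    return weights_gb + overhead_gb

# e.g. a 7B model at Q4 -> ~5 GB, which fits an 8 GB card
print(round(estimate_vram_gb(7, "Q4"), 1))  # prints 5.0
```

Longer context windows grow the KV cache, so a real calculator would scale the overhead with context length rather than keep it fixed.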

How to use the tools in order

  1. Start with the GPU Quiz if you don't yet know what hardware you need. Five questions about your use case, budget, and OS, and you get a personalized recommendation with a direct purchase link.
  2. Use the GPU Finder if you already know what kind of model you want to run (7B, 13B, 70B, image) but not which GPU to buy. It ranks options by real-world performance on that model class.
  3. Check compatibility with the VRAM Calculator before you download a model. Plug in your GPU and the target model to see whether it fits in VRAM or you need to drop the quantization.
  4. Run the ROI numbers with the Cost Calculator if you're justifying the spend. It shows what you currently pay for APIs and how many months it takes the hardware to break even.
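The break-even math in step 4 is straightforward division. A minimal sketch, assuming a flat monthly electricity cost (the dollar figures are illustrative, not the calculator's real inputs):

```python
def breakeven_months(hardware_cost: float, monthly_api_spend: float,
                     monthly_power_cost: float = 10.0) -> float:
    """Months until local hardware pays back versus API bills (sketch).
    Savings per month = what you stop paying for APIs minus electricity."""
    monthly_saving = monthly_api_spend - monthly_power_cost
    if monthly_saving <= 0:
        return float("inf")  # hardware never breaks even
    return hardware_cost / monthly_saving

# e.g. a $400 GPU against $50/month in API spend and ~$10/month power
print(round(breakeven_months(400, 50), 1))  # prints 10.0
```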

Minimum recommended hardware to get started

If you don't have hardware yet and you're evaluating whether to start with local AI, here's the executive summary:

  • GPU: 8 GB of VRAM minimum for 7B models. 12–16 GB recommended if you want to experiment with 13B models. The RTX 3060 12 GB is the community's most popular entry point.
  • System RAM: 32 GB minimum. 16 GB can work but you'll see swapping during model load.
  • Storage: NVMe SSD, at least 1 TB. Models range from 4 GB (7B at Q4) to 40+ GB (70B at Q4). An HDD slows model loading significantly.
  • Software: Ollama (recommended for beginners), LM Studio (GUI), or llama.cpp (maximum control). All free and open source.
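The storage figures above follow directly from parameter count and quantization bit width. A rough sketch (real GGUF files run slightly larger because of metadata and mixed-precision layers):

```python
def model_file_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate model file size on disk: parameters x bits / 8 (sketch)."""
    return params_billion * bits_per_weight / 8

# 7B at 4-bit -> ~3.5 GB; 70B at 4-bit -> ~35 GB,
# in line with the 4 GB to 40+ GB range quoted above
print(model_file_gb(7, 4), model_file_gb(70, 4))  # prints 3.5 35.0
```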