Local Engine Ready

Llama 3.2 90B Vision

Only 2 of the GPUs we track can run Llama 3.2 90B Vision at Q4 natively. Precise VRAM thresholds and benchmarks below.

2 Compatible GPUs
3 with offloading
90B params
131K context
Javier Morales · AI Hardware Specialist · 8 years of experience
GitHub: github.com/javier-morales-ia

llama.cpp 0.2.x · CUDA 12 · ROCm 6 · Updated monthly · methodology →

Execution Context

ARCHITECTURE TRANSFORMER
CONTEXT 131K TOKENS
QUANTIZATION 4-BIT GGUF
PROVIDER Meta
LICENSE llama-3.2-community
VRAM REQUIREMENT
54 GB
Hardware Decision

This model requires a flagship setup: 54 GB+ VRAM runs Q4 natively; 48 GB can run it with partial offloading.

Minimum

M4 Max

Runs at Q4 with partial offload — functional, some wait

48 GB VRAM
View compatible setup
Balanced

M3 Ultra

Best value for daily use

192 GB VRAM
View compatible setup
Optimal

M4 Ultra

Full quality, fastest inference

128 GB VRAM
View compatible setup

*Prices and availability may change. Some links are affiliate links.

System Requirements

| Component | Requirement | Notes |
|---|---|---|
| GPU VRAM | 54 GB | High-end GPU |
| System RAM | 81 GB | 64 GB or more |
| Storage | 54 GB | Q4 · SSD recommended |
| CPU | Any modern CPU | GPU required |

VRAM by Quantization

| Quantization | VRAM needed | Disk space | Quality |
|---|---|---|---|
| FP16 (max quality) | 180 GB | 180 GB | Maximum |
| Q8 (high quality) | 90 GB | 90 GB | Near-lossless |
| Q4 (recommended) | 54 GB | 54 GB | Best balance |
| Q2 (minimum) | 27 GB | 27 GB | Quality loss |
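As a sanity check, these figures track the simple weights-only estimate of parameters × bits ÷ 8. A minimal sketch, using the 90B parameter count from this page (the KV cache, vision encoder, and runtime buffers push real usage higher, which is why Q4 is listed above 45 GB):

```python
def weight_footprint_gb(params_b: float, bits: int) -> float:
    """Weight-only memory for a model, in GB (params in billions)."""
    return params_b * bits / 8

# 90B at common quantization levels; weights only, real usage is higher
for name, bits in [("FP16", 16), ("Q8", 8), ("Q4", 4), ("Q2", 2)]:
    print(f"{name}: ~{weight_footprint_gb(90, bits):.1f} GB")
```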

Model Details

Developer Meta
Parameters 90B
Context window 131,072 tokens
License llama-3.2-community
Use cases vision, multimodal, chat, image-analysis
Released 2024-09

Install with Ollama

ollama run llama3.2-vision:90b
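Once the model is pulled, Ollama also exposes a local REST API (default port 11434). A minimal sketch of building a vision request for its /api/generate endpoint; the model tag comes from the command above, while the prompt and image bytes are placeholders:

```python
import base64
import json

def build_vision_request(prompt: str, image_bytes: bytes) -> dict:
    """Payload for Ollama's /api/generate endpoint with one image.

    Images are passed as a list of base64-encoded strings.
    """
    return {
        "model": "llama3.2-vision:90b",
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }

payload = build_vision_request("Describe this image.", b"<raw image bytes>")
# POST json.dumps(payload) to http://localhost:11434/api/generate
print(json.dumps(payload)[:80])
```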

Hugging Face

meta-llama/Llama-3.2-90B-Vision-Instruct
View on HF →
Technical Requirements

Can your GPU run Llama 3.2 90B Vision?

Llama 3.2 90B Vision requires 54 GB of VRAM at Q4. Only 2 tested GPUs meet this threshold; below 54 GB you'll hit significant offload latency.

27 GB · critical minimum (Q2)
54 GB · optimal (Q4)
90 GB · high quality (Q8)
180 GB · maximum (FP16)

Hardware Performance Matrix

2 Q4 native · 3 offload · 35 unsupported

| GPU | VRAM | Compatibility | Est. speed |
|---|---|---|---|
| M4 Ultra | 128 GB | Optimal | 45 tok/s |
| M3 Ultra | 192 GB | Optimal | 38 tok/s |
| RTX 5090 | 32 GB | Offload | |
| M4 Max 48GB | 48 GB | Offload | 20 tok/s |
| M4 Max 36GB | 36 GB | Offload | |
| RTX 4090 | 24 GB | N/A | |
| RTX 5080 | 16 GB | N/A | |
| RTX 4080 Super | 16 GB | N/A | |
| RTX 5070 Ti | 16 GB | N/A | |
| RTX 3090 | 24 GB | N/A | |
| RX 7900 XTX | 24 GB | N/A | |
| RTX 4070 Ti Super | 16 GB | N/A | |
| RTX 3080 Ti | 12 GB | N/A | |
| RX 7900 XT | 20 GB | N/A | |
| RTX 5070 | 12 GB | N/A | |
| RTX 3080 | 10 GB | N/A | |
| M4 Pro | 24 GB | N/A | |
| RX 7800 XT | 16 GB | N/A | |
| RX 6800 XT | 16 GB | N/A | |
| RTX 4070 | 12 GB | N/A | |
| RTX 4060 Ti 16GB | 16 GB | N/A | |
| RX 7700 XT | 12 GB | N/A | |
| RTX 3070 Ti | 8 GB | N/A | |
| RTX 4060 Ti 8GB | 8 GB | N/A | |
| RTX 3070 | 8 GB | N/A | |
| RX 6700 XT | 12 GB | N/A | |
| M3 Pro | 18 GB | N/A | |
| RTX 3060 Ti | 8 GB | N/A | |
| RTX 2080 Ti | 11 GB | N/A | |
| RTX 3060 | 12 GB | N/A | |
| M2 Pro | 16 GB | N/A | |
| RTX 4060 | 8 GB | N/A | |
| Arc A770 | 16 GB | N/A | |
| M1 Pro | 16 GB | N/A | |
| RX 7600 | 8 GB | N/A | |
| RX 6600 XT | 8 GB | N/A | |
| Arc A750 | 8 GB | N/A | |
| RX 6600 | 8 GB | N/A | |
| RTX 3050 | 8 GB | N/A | |
| GTX 1660 Super | 6 GB | N/A | |

Recommended GPUs for Llama 3.2 90B Vision

Real Benchmarks
No Paid Reviews
Editorial Selection
Data-Driven

Best picks by compatibility, VRAM headroom, and value — prices and availability may change.

Some links are Amazon affiliate links. We may earn a commission at no extra cost to you. Amazon cookies may last up to 24 hours after your click.

Llama 3.2 90B Vision — Compatibility guide

At 90B parameters, Llama 3.2 90B Vision only runs fully on high-memory unified-memory machines (Apple Ultra chips) or multi-GPU and server configurations. Consider distilled versions if available. The VRAM calculator can help you find compatible alternatives.
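For cards below the 54 GB threshold, llama.cpp can offload part of the model to system RAM via its --n-gpu-layers (-ngl) flag. A rough sketch of picking that value; the ~100-layer count and 4 GB reserve are illustrative assumptions, not measured figures:

```python
def gpu_layers(vram_gb: float, model_gb: float, n_layers: int,
               reserve_gb: float = 4.0) -> int:
    """Rough --n-gpu-layers value for llama.cpp partial offloading.

    Assumes layers are roughly equal in size and reserves some VRAM
    for the KV cache, vision projector, and scratch buffers.
    """
    per_layer_gb = model_gb / n_layers
    fit = int((vram_gb - reserve_gb) / per_layer_gb)
    return max(0, min(n_layers, fit))

# Hypothetical split: 54 GB of Q4 weights over an assumed 100 layers
print(gpu_layers(vram_gb=32, model_gb=54, n_layers=100))  # RTX 5090 -> 51
```

The remaining layers run on the CPU, which is where the "significant offload latency" above comes from.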

Compare GPUs for Llama 3.2 90B Vision

Which GPU is worth it? Real specs and benchmarks side by side.

Compatible Hardware

GPUs that run Llama 3.2 90B Vision at Q4 — sorted by AI performance score.

M4 Ultra

Apple · 128 GB VRAM

Q4 OK
45 tok/s > $1000
M3 Ultra

Apple · 192 GB VRAM

Q4 OK
38 tok/s > $1000


More Practical Alternatives

Similar models in the vision category with comparable VRAM footprints.

Not sure which GPU you need for Llama 3.2 90B Vision?

The VRAM Calculator tells you exactly which quantization your hardware can handle.