Gemma 4 26B A4B
Google's Gemma 4 MoE model — 26B total parameters with only 4B active, so it runs almost as fast as a 4B model. #6 open model on LMArena.
Model Specifications
Architecture: Vision
Parameters: 26B
Family: gemma
VRAM (Q4): 13.0 GB
Mixture of Experts: 4B active inference parameters
MoE architecture. 256K context. LMArena score 1441. Apache 2.0 license.
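The 13.0 GB Q4 figure follows from a simple weights-only estimate: parameters × bytes per weight, where 4-bit quantization means half a byte per weight. A minimal sketch (the `vram_gb` helper and its `overhead` parameter are illustrative, not part of any calculator API; real usage also needs room for KV cache and activations):

```python
def vram_gb(total_params_billions: float, bits_per_weight: float) -> float:
    """Weights-only VRAM estimate in GB.

    1B parameters at 8 bits/weight ~= 1 GB, so scale by bits/8.
    Ignores KV cache, activations, and runtime overhead.
    """
    return total_params_billions * (bits_per_weight / 8)

# 26B total parameters at Q4: 26 * 0.5 = 13.0 GB
print(round(vram_gb(26, 4), 1))
```

Note that for an MoE model, all 26B parameters must fit in memory even though only 4B are active per token — the active count governs speed, not VRAM.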
Related Guides
How much VRAM do you really need?
A complete breakdown of quantization levels and VRAM overhead for running local models.
Best GPUs for Machine Learning in 2026
Comparing NVIDIA and AMD options for the best speed-to-dollar ratio.
GGUF vs EXL2 vs AWQ
Understanding local AI formats and which one to pick for your specific hardware.