LocalOps

Gemma-4-E4B-it-assistant

Gemma 4 E4B instruction-tuned assistant model from Google.

Specifications

Architecture:  LLM
Parameters:    4B
Family:        Gemma
VRAM (Q4):     2.0 GB
Tags:          LLM, Gemma, instruction-tuned


Quantization Estimates

Format    VRAM Need   Tier
FP16      8.0 GB      Full Precision
Q8_0      4.0 GB      High
Q6_K      3.4 GB      Excellent
Q5_K_M    2.8 GB      Great
Q4_K_M    2.0 GB      Sweet Spot
Q2_K      1.2 GB      Emergency
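These estimates appear to follow a simple rule: weight footprint is parameter count times effective bits per weight, divided by 8. The sketch below reproduces the table under that assumption; the bits-per-weight values are assumptions chosen to match these numbers (real GGUF k-quants have slightly different effective rates), and KV-cache and runtime overhead are not included.

```python
# Rough VRAM estimate for model weights: params * bits-per-weight / 8.
# The bits-per-weight values are assumptions that reproduce the table
# above; actual GGUF quant formats vary slightly, and KV cache /
# runtime overhead is excluded.
PARAMS_B = 4.0  # 4B parameters

BITS_PER_WEIGHT = {
    "FP16": 16.0,
    "Q8_0": 8.0,
    "Q6_K": 6.8,
    "Q5_K_M": 5.6,
    "Q4_K_M": 4.0,
    "Q2_K": 2.4,
}

def vram_gb(params_b: float, bpw: float) -> float:
    """Weight-only VRAM footprint in GB for a given bits-per-weight."""
    return round(params_b * bpw / 8, 1)

for fmt, bpw in BITS_PER_WEIGHT.items():
    print(f"{fmt}: {vram_gb(PARAMS_B, bpw)} GB")
```

For example, FP16 stores each of the 4B parameters in 2 bytes, giving the 8.0 GB figure, and Q4_K_M at roughly 4 bits per weight lands at 2.0 GB.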
