Codestral 2 (22B)
Released by Mistral in April 2026. Next-generation code specialist with fill-in-the-middle (FIM) support; beats GPT-4o on HumanEval and MBPP. The first Codestral released under Apache 2.0; previous versions shipped under the non-commercial MNPL.
Model Specifications
| Spec | Value |
|---|---|
| Architecture | TEXT |
| Parameters | 22B |
| Family | codestral |
| VRAM (Q4) | 11.0 GB |
Apache 2.0 licensed. 256K context. 380K Hugging Face downloads in the first week. Runs on a single RTX 4090 at Q4 (~13 GB VRAM including inference overhead; the weights alone are ~11 GB).
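FIM lets the model complete code between an existing prefix and suffix rather than only appending at the end. A minimal sketch of assembling a raw FIM prompt, assuming the `[SUFFIX]`/`[PREFIX]` control-token layout from earlier Codestral releases carries over (suffix first, then prefix; verify against the official tokenizer template before use):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt: the model generates the
    code that belongs between prefix and suffix.

    Token layout is an assumption based on Codestral v1's template.
    """
    return f"[SUFFIX]{suffix}[PREFIX]{prefix}"

# Ask the model to fill in the body of add():
prompt = build_fim_prompt(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(2, 3))\n",
)
```

In practice an inference server (llama.cpp, vLLM, or Mistral's API) applies this template for you; the sketch only shows the ordering of the pieces.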
Estimated Quantization Sizes
| Format | Precision | Est. VRAM | Recommendation |
|---|---|---|---|
| FP16 / BF16 | 16-bit | 44.0 GB | Uncompressed Base |
| Q8_0 | 8-bit | 22.0 GB | Near Lossless |
| Q6_K | 6-bit | 16.5 GB | Excellent Balance |
| Q4_K_M | 4-bit | 11.0 GB | Standard Use |
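The estimates above follow a simple weights-only rule of thumb, parameters × bits per weight ÷ 8. Real quants such as Q4_K_M use slightly more effective bits per weight, and the KV cache adds on top, so treat these as lower bounds:

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Weight memory in GB: parameters (billions) * bits / 8.

    Excludes KV cache, activations, and runtime overhead.
    """
    return params_billion * bits_per_weight / 8

# 22B model at the precisions in the table above:
for bits in (16, 8, 6, 4):
    print(f"{bits}-bit: {estimate_vram_gb(22, bits)} GB")
# 16-bit: 44.0 GB, 8-bit: 22.0 GB, 6-bit: 16.5 GB, 4-bit: 11.0 GB
```

This is why the overview quotes ~13 GB on an RTX 4090 at Q4: 11 GB of weights plus a couple of gigabytes for the KV cache and runtime buffers.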
Related Guides
How much VRAM do you really need?
A complete breakdown of quantization levels and VRAM overhead for running local models.
Best GPUs for Machine Learning in 2026
Comparing NVIDIA and AMD options for the best speed-to-dollar ratio.
GGUF vs EXL2 vs AWQ
Understanding local AI formats and which one to pick for your specific hardware.