
Magistral Small

Mistral AI's first open-weight reasoning model: 24B parameters under the Apache 2.0 license. Chain-of-thought reasoning in 20+ languages, scoring 70.7% on AIME 2024. Once quantized, it fits on a single RTX 4090 or a 32GB-RAM MacBook.

Model Specifications

Architecture   Text
Parameters     24B
Family         magistral
VRAM (Q4)      12.0 GB

128K context window, but performance may degrade past 40K tokens. Excellent for domain-specific reasoning tasks.
#mistral #reasoning #apache2 #multilingual #math #trending
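
As a rough guide to turning these specs into launch settings, here is a minimal local-inference sketch using llama-cpp-python with a Q4_K_M GGUF quant. The repo id and filename pattern below are assumptions, not confirmed artifact names; substitute whichever Magistral Small quant you actually downloaded.

```python
# Minimal sketch: run a Q4_K_M quant locally with llama-cpp-python
# (pip install llama-cpp-python huggingface-hub).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mistralai/Magistral-Small-2506_gguf",  # assumed GGUF repo id
    filename="*Q4_K_M.gguf",                        # assumed filename pattern
    n_ctx=40960,      # cap context near 40K tokens, where quality still holds
    n_gpu_layers=-1,  # offload all layers; ~12 GB at Q4_K_M fits an RTX 4090
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How many primes are below 100?"}],
    max_tokens=1024,
)
print(result["choices"][0]["message"]["content"])
```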

Estimated Quantization Sizes

Format        Precision  Est. VRAM  Recommendation
FP16 / BF16   16-bit     48.0 GB    Uncompressed Base
Q8_0          8-bit      24.0 GB    Near Lossless
Q6_K          6-bit      18.0 GB    Excellent Balance
Q4_K_M        4-bit      12.0 GB    Standard Use
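
These estimates follow from simple arithmetic: weight memory is roughly parameter count times bits per weight, divided by 8 bits per byte. A back-of-envelope sketch, assuming nominal bit widths; real GGUF quants pack slightly more bits per weight (Q4_K_M is closer to ~4.8 bpw), and the KV cache adds overhead on top at inference time.

```python
# Back-of-envelope VRAM estimate: params * bits-per-weight / 8 bytes.
# Nominal bit widths reproduce the table above; actual usage runs higher.
PARAMS = 24e9  # Magistral Small parameter count

def est_weight_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Approximate weight memory in GB for a given quantization width."""
    return params * bits_per_weight / 8 / 1e9

for fmt, bits in [("FP16 / BF16", 16), ("Q8_0", 8), ("Q6_K", 6), ("Q4_K_M", 4)]:
    print(f"{fmt:12s} ~ {est_weight_gb(bits):5.1f} GB")
# -> 48.0, 24.0, 18.0, and 12.0 GB, matching the table
```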
