InternLM3 70B
Shanghai AI Lab's flagship open model, offering strong reasoning and tool-use capabilities at 70B parameters.
Model Specifications
| Spec | Value |
|---|---|
| Architecture | Text |
| Parameters | 70B |
| Family | internlm |
| VRAM (Q4) | 35.0 GB |
Estimated Quantization Sizes
| Format | Precision | Est. VRAM | Recommendation |
|---|---|---|---|
| FP16 / BF16 | 16-bit | 140.0 GB | Uncompressed Base |
| Q8_0 | 8-bit | 70.0 GB | Near Lossless |
| Q6_K | 6-bit | 52.5 GB | Excellent Balance |
| Q4_K_M | 4-bit | 35.0 GB | Standard Use |
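The estimates above follow directly from parameter count times bytes per weight. A minimal sketch of that arithmetic (using nominal bit-widths; real K-quants such as Q4_K_M average slightly more bits per weight, and KV cache plus runtime overhead push actual usage higher):

```python
def est_vram_gb(params_billions: float, bits: int) -> float:
    """Estimate weight memory in GB: parameters (in billions) x bytes per weight.

    Uses nominal bit-widths only; ignores K-quant overhead and KV cache.
    """
    return params_billions * bits / 8

# Reproduce the table for a 70B model
for fmt, bits in [("FP16/BF16", 16), ("Q8_0", 8), ("Q6_K", 6), ("Q4_K_M", 4)]:
    print(f"{fmt}: {est_vram_gb(70, bits):.1f} GB")
# FP16/BF16: 140.0 GB, Q8_0: 70.0 GB, Q6_K: 52.5 GB, Q4_K_M: 35.0 GB
```

Budget an extra few GB beyond these figures for context (KV cache) and framework overhead.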
Related Guides
How much VRAM do you really need?
A complete breakdown of quantization levels and VRAM overhead for running local models.
Best GPUs for Machine Learning in 2026
Comparing NVIDIA and AMD options for the best speed-to-dollar ratio.
GGUF vs EXL2 vs AWQ
Understanding local AI formats and which one to pick for your specific hardware.