Ling 2.6-1T
A large language model from inclusionAI, likely with 2.6 billion parameters and potentially trained on 1 trillion tokens.
Specifications
- Source: inclusionAI
- Architecture: LLM
- Parameters: 2.6B
- Family: Ling
- VRAM (Q4): 1.3 GB
- Tags: LLM, Language Model, Text Generation, 2.6B
Run in the Cloud
At roughly 1.3 GB of VRAM at Q4, this model fits on most consumer GPUs. If you prefer not to run it locally, you can rent a GPU on RunPod and start generating.
Quantization Estimates
| Format | VRAM Needed | Tier |
|---|---|---|
| FP16 | 5.2 GB | Full Precision |
| Q8_0 | 2.6 GB | High |
| Q6_K | 2.21 GB | Excellent |
| Q5_K_M | 1.82 GB | Great |
| Q4_K_M | 1.3 GB | Sweet Spot |
| Q2_K | 0.78 GB | Emergency |
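The figures above follow a simple weight-memory heuristic: parameter count times bytes per weight for each quantization format. The sketch below reproduces the table under that assumption; the bytes-per-parameter ratios are inferred from the table itself, not taken from any official calculator, and the estimate ignores KV cache and runtime overhead.

```python
# Rough VRAM estimate: parameters (in billions) x bytes per weight.
# Ratios are inferred from the table above (an assumption, not an
# official formula); KV cache and runtime overhead are not included.

BYTES_PER_PARAM = {
    "FP16": 2.0,
    "Q8_0": 1.0,
    "Q6_K": 0.85,
    "Q5_K_M": 0.70,
    "Q4_K_M": 0.50,
    "Q2_K": 0.30,
}

def estimate_vram_gb(params_billions: float, fmt: str) -> float:
    """Approximate weight memory in GB for the given size and format."""
    return round(params_billions * BYTES_PER_PARAM[fmt], 2)

for fmt in BYTES_PER_PARAM:
    print(f"{fmt}: {estimate_vram_gb(2.6, fmt)} GB")
```

For a 2.6B-parameter model this yields 5.2 GB at FP16 down to 0.78 GB at Q2_K, matching the table.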