Laguna-XS.2
Laguna XS.2, a large language model from Poolside.
Quantization Estimates
| Format | VRAM Needed | Quality Tier |
|---|---|---|
| FP16 | 14.0 GB | Full Precision |
| Q8_0 | 7.0 GB | High |
| Q6_K | 6.0 GB | Excellent |
| Q5_K_M | 4.9 GB | Great |
| Q4_K_M | 3.5 GB | Sweet Spot |
| Q2_K | 2.1 GB | Emergency |
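The estimates above follow a simple rule: weights-only VRAM is roughly parameter count × bits per weight ÷ 8. A minimal sketch, assuming the 14.0 GB FP16 row implies a ~7B-parameter model and using approximate bits-per-weight values chosen to reproduce the table (real GGUF formats vary slightly by tensor mix, and actual usage adds KV cache and runtime overhead on top):

```python
def estimate_vram_gb(params: float, bits_per_weight: float) -> float:
    """Weights-only VRAM in GB: params * bits / 8 bits-per-byte / 1e9."""
    return params * bits_per_weight / 8 / 1e9

# Approximate effective bits per weight per format (assumed values,
# picked to match the table above; not official GGUF figures).
BITS = {
    "FP16": 16.0,
    "Q8_0": 8.0,
    "Q6_K": 6.86,
    "Q5_K_M": 5.6,
    "Q4_K_M": 4.0,
    "Q2_K": 2.4,
}

PARAMS = 7e9  # assumption: inferred from the 14.0 GB FP16 row

for fmt, bits in BITS.items():
    print(f"{fmt}: {estimate_vram_gb(PARAMS, bits):.1f} GB")
```

Leave a few GB of headroom beyond these numbers for context (KV cache grows with sequence length) and framework overhead.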
Similar Models
Ling 2.6-1T
A large language model from inclusionAI, likely with 2.6 billion parameters and potentially trained on 1 trillion tokens.
DeepSeek-V4-Flash
236B parameters. DeepSeek V4 Flash, a fast large language model from DeepSeek AI.
DeepSeek-V4-Pro
236B parameters. DeepSeek V4 Pro, a large language model from DeepSeek AI.