LocalOps

nanowhale

A 100B parameter language model from HuggingFaceTB.

Specifications

Source
Architecture: LLM
Parameters: 100B
Family: nanowhale
VRAM (Q4): 50.0 GB
Tags: text-generation, small, huggingfacetb


Quantization Estimates

Format    VRAM Need   Tier
FP16      200.0 GB    Full Precision
Q8_0      100.0 GB    High
Q6_K      85.0 GB     Excellent
Q5_K_M    70.0 GB     Great
Q4_K_M    50.0 GB     Sweet Spot
Q2_K      30.0 GB     Emergency
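The estimates above follow a simple rule: weight memory is parameter count times bits per weight, divided by 8, in decimal gigabytes. A minimal sketch of that calculation, where the bits-per-weight values are assumptions back-derived from the table rather than official per-format figures:

```python
# Sketch: estimate weight-only VRAM for a model at a given quantization level.
# The bits-per-weight values are rough effective averages implied by the table
# above (assumptions, not exact GGUF block sizes).

PARAMS = 100e9  # 100B parameters, per the spec table

BITS_PER_WEIGHT = {
    "FP16": 16.0,
    "Q8_0": 8.0,
    "Q6_K": 6.8,
    "Q5_K_M": 5.6,
    "Q4_K_M": 4.0,
    "Q2_K": 2.4,
}

def weight_vram_gb(params: float, bits: float) -> float:
    """VRAM needed just for the weights, in decimal GB.

    Real usage is higher: add KV cache, activations, and runtime overhead.
    """
    return params * bits / 8 / 1e9

for fmt, bits in BITS_PER_WEIGHT.items():
    print(f"{fmt:8s} {weight_vram_gb(PARAMS, bits):6.1f} GB")
```

Note this counts weights only; context length drives additional KV-cache memory on top of these figures.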
