
LFM2 1.2B

Liquid AI's on-device model, rivaling Qwen3-1.7B with 47% fewer parameters; CPU- and NPU-friendly.

Model Specifications

Architecture: TEXT
Parameters: 1.2B
Family: lfm
VRAM (Q4): 0.6 GB
Tags: #liquid #edge #efficient #hybrid

Estimated Quantization Sizes

| Format      | Precision | Est. VRAM | Recommendation    |
|-------------|-----------|-----------|-------------------|
| FP16 / BF16 | 16-bit    | 2.4 GB    | Uncompressed Base |
| Q8_0        | 8-bit     | 1.2 GB    | Near Lossless     |
| Q6_K        | 6-bit     | 0.9 GB    | Excellent Balance |
| Q4_K_M      | 4-bit     | 0.6 GB    | Standard Use      |
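The estimates above follow a simple rule of thumb: parameter count times bits per weight, divided by 8 bits per byte. A minimal sketch reproducing the table (the function name is illustrative; real GGUF quants such as Q4_K_M use mixed bit-widths averaging slightly more than their nominal size, and runtime memory also includes the KV cache, so treat these as lower bounds):

```python
def quant_size_gb(n_params_billion: float, bits_per_weight: int) -> float:
    """Rough weight-memory estimate: params x bits / 8 bits per byte."""
    return n_params_billion * bits_per_weight / 8

# Reproduce the table for LFM2 1.2B (1.2 billion parameters)
for fmt, bits in [("FP16/BF16", 16), ("Q8_0", 8), ("Q6_K", 6), ("Q4_K_M", 4)]:
    print(f"{fmt}: {quant_size_gb(1.2, bits):.1f} GB")
```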
