LocalOps

LFM2 3B

Liquid AI's 3B hybrid model for scalable server deployment with strong multilingual capabilities

Model Specifications

Architecture: TEXT
Parameters:   3B
Family:       lfm
VRAM (Q4):    1.5 GB
Tags:         #liquid, #efficient, #hybrid

Estimated Quantization Sizes

Format      | Precision        | Est. VRAM | Recommendation
FP16 / BF16 | 16-bit           | 6.0 GB    | Uncompressed Base
Q8_0        | 8-bit (High)     | 3.0 GB    | Near Lossless
Q6_K        | 6-bit            | 2.3 GB    | Excellent Balance
Q4_K_M      | 4-bit (Popular)  | 1.5 GB    | Standard Use
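The sizes above follow simple bits-per-weight arithmetic: parameter count times bits per weight, divided by 8 to get bytes. A minimal sketch of that calculation, under the assumption of a flat bit width per format (real GGUF K-quants such as Q6_K and Q4_K_M mix bit widths and carry per-block metadata, so actual files run slightly larger, and runtime VRAM also needs room for the KV cache and activations):

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-memory estimate in GB (1 GB = 1e9 bytes).

    Assumes every weight is stored at exactly `bits_per_weight` bits,
    ignoring quantization metadata, KV cache, and activation memory.
    """
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# Reproduce the table for a 3B-parameter model at nominal bit widths.
for fmt, bits in [("FP16/BF16", 16), ("Q8_0", 8), ("Q6_K", 6), ("Q4_K_M", 4)]:
    print(f"{fmt:9s} ~{estimate_vram_gb(3.0, bits):.1f} GB")
```

Note the Q6_K figure from this nominal 6-bit arithmetic lands just under the table's 2.3 GB, since the actual Q6_K format averages closer to 6.5 bits per weight.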
