LocalOps

GLM Z1 32B


Zhipu AI's efficient reasoning model: 32B parameters with strong math and code performance.

Model Specifications

Architecture: Text
Parameters: 32B
Family: glm
VRAM (Q4): 16.0 GB
Tags: #zhipu #reasoning #efficient #trending

Estimated Quantization Sizes

Format      | Precision | Est. VRAM | Recommendation
FP16 / BF16 | 16-bit    | 64.0 GB   | Uncompressed Base
Q8_0        | 8-bit     | 32.0 GB   | Near Lossless
Q6_K        | 6-bit     | 24.0 GB   | Excellent Balance
Q4_K_M      | 4-bit     | 16.0 GB   | Standard Use
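The estimates above follow a simple rule of thumb: parameter count times nominal bits per weight. A minimal sketch of that calculation, assuming nominal bit widths and ignoring KV cache, activations, and quantization-format overhead (real formats like Q4_K_M average slightly more than 4 bits per weight):

```python
def estimated_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough VRAM estimate in GB: parameters x bits per weight.

    Ignores KV cache, activation memory, and per-format overhead,
    so treat the result as a lower bound.
    """
    return params_billion * bits_per_weight / 8


# Reproduce the table rows for a 32B-parameter model
for fmt, bits in [("FP16 / BF16", 16), ("Q8_0", 8), ("Q6_K", 6), ("Q4_K_M", 4)]:
    print(f"{fmt}: {estimated_vram_gb(32, bits):.1f} GB")
```

For example, 32B parameters at 4 bits is 32 × 4 / 8 = 16.0 GB, matching the Q4_K_M row.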
