LocalOps

o4-mini

Compact OpenAI reasoning model with strong performance at lower cost


Specifications

- Source: -
- Architecture: TEXT
- Parameters: -
- Family: o-series
- VRAM (Q4): API
- Tags: reasoning, openai, efficient
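The VRAM (Q4) field reads "API" because o4-mini is served only through OpenAI's API and its parameter count is undisclosed, so there is no local footprint to compute. For open-weight models, calculators like this one typically estimate Q4 VRAM with a rule of thumb; a minimal sketch, assuming a 4-bit weight footprint plus a flat 20% overhead factor for KV cache and activations (both values are assumptions, not this site's exact formula):

```python
def estimate_vram_gb(params_billions: float,
                     bits_per_weight: int = 4,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate for running a quantized model locally.

    params_billions: model size in billions of parameters
    bits_per_weight: quantization width (4 for Q4)
    overhead: multiplier for KV cache / activations (assumed 1.2x)
    """
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9  # convert bytes to GB


# o4-mini's parameter count is undisclosed ("-" above), so
# illustrate with a hypothetical 8B open-weight model:
print(round(estimate_vram_gb(8), 1))  # 4.8
```

Under these assumptions an 8B model at Q4 needs roughly 4.8 GB of VRAM, which is why mid-size open models fit on consumer GPUs while API-only models like o4-mini do not appear in the local-rig math at all.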

Build your Local Rig

Ready to run locally? Shop top-tier GPUs on Amazon for the best performance.

Instant Cloud GPUs

Running out of VRAM? Rent a high-end H100 or RTX 4090 on RunPod and deploy in seconds.

