An MLX-quantized version of openai/gpt-oss-120b.

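As a minimal usage sketch, the snippet below loads the model with the mlx-lm Python package and generates a short completion. The repository id `EricFillion/gpt-oss-120b-mlx` is taken from this card; the `load`/`generate` calls follow mlx-lm's public API, though exact argument names can vary between versions.

```python
# Minimal usage sketch with mlx-lm (pip install mlx-lm).
# Assumes Apple silicon with enough unified memory for a 117B model.
from mlx_lm import load, generate

# Download (or reuse a cached copy of) the quantized weights from the Hub.
model, tokenizer = load("EricFillion/gpt-oss-120b-mlx")

# Generate a short completion from a plain-text prompt.
response = generate(
    model,
    tokenizer,
    prompt="Explain MLX quantization in one sentence.",
    max_tokens=128,
    verbose=True,
)
print(response)
```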
The model was converted using this repository with the default quantization settings (as of February 2, 2026).

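For reference, here is a sketch of how such a conversion is typically produced with mlx-lm's `convert` utility. Treating "default quantization settings" as mlx-lm's defaults (4-bit with group size 64 at the time of writing) is an assumption; check the version you use.

```python
# Conversion sketch using mlx-lm's convert utility.
# quantize=True applies mlx-lm's default quantization settings
# (assumed here to be 4-bit, group size 64; verify for your version).
from mlx_lm import convert

convert(
    hf_path="openai/gpt-oss-120b",  # source weights on the Hub
    mlx_path="gpt-oss-120b-mlx",    # local output directory
    quantize=True,
)
```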
License

Apache-2.0

Model details

- Format: Safetensors
- Parameters: 117B
- Tensor types: BF16, U32, U8
- Library: MLX
- Quantization: 4-bit
