Hugging Face
majentik/gpt-oss-20b-RotorQuant-MLX-4bit
Tags: Text Generation, MLX, Safetensors, gpt_oss, rotorquant, kv-cache-quantization, gpt-oss, openai, Mixture of Experts, quantized, 4bit, conversational, 4-bit precision
arXiv: 2504.19874
License: apache-2.0
Files and versions (main branch): 11.2 GB total
1 contributor, 2 commits
Latest commit: 7b8e780 (verified) by majentik, "Add MLX 4-bit quantized model", 1 day ago
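The repository ships weights in 4-bit precision. To make the "4bit / quantized" tags concrete, here is a minimal pure-Python sketch of affine group quantization, the general scheme MLX-style quantizers use: weights are split into small groups, and each group stores low-bit integer codes plus a per-group scale and bias. The group size, rounding details, and packing here are illustrative assumptions, not values read from this repo's config.

```python
# Sketch of affine 4-bit group quantization (illustrative; the exact
# group size and packing used by this checkpoint may differ).

def quantize_group(values, bits=4):
    """Quantize one group of floats to `bits`-bit integer codes.

    Returns (codes, scale, bias) such that value ~= code * scale + bias.
    """
    levels = (1 << bits) - 1              # 15 representable steps for 4-bit
    lo, hi = min(values), max(values)
    scale = (hi - lo) / levels if hi > lo else 1.0
    bias = lo
    codes = [round((v - bias) / scale) for v in values]
    return codes, scale, bias

def dequantize_group(codes, scale, bias):
    """Reconstruct approximate float values from codes."""
    return [c * scale + bias for c in codes]

# One hypothetical group of eight weights.
weights = [0.01, -0.25, 0.5, 0.13, -0.4, 0.33, 0.0, 0.22]
codes, scale, bias = quantize_group(weights)
approx = dequantize_group(codes, scale, bias)
max_err = max(abs(a - b) for a, b in zip(weights, approx))

# Rounding error is bounded by half a quantization step.
assert max_err <= scale / 2 + 1e-9
```

Storing 4-bit codes plus a small per-group scale and bias is what shrinks a ~40 GB bf16 checkpoint of this size class down to the ~11 GB listed above, at the cost of the bounded rounding error shown.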
All files below were added in the commit "Add MLX 4-bit quantized model" (1 day ago). Files stored via xet are marked.

.gitattributes                      1.57 kB
README.md                           3.81 kB
chat_template.jinja                 16.7 kB
config.json                         34 kB
generation_config.json              177 Bytes
model-00001-of-00003.safetensors    5.31 GB   (xet)
model-00002-of-00003.safetensors    5.26 GB   (xet)
model-00003-of-00003.safetensors    608 MB    (xet)
model.safetensors.index.json        67 kB
tokenizer.json                      27.9 MB   (xet)
tokenizer_config.json               351 Bytes
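The three model-0000X-of-00003.safetensors shards are tied together by model.safetensors.index.json, which maps every tensor name to the shard file that stores it, plus a total-size metadata entry. The sketch below shows how a loader resolves that index; the tensor names and byte count in the miniature index are hypothetical stand-ins, not the real contents of this repo's index file.

```python
import json

# Hypothetical miniature of a sharded-checkpoint index; the real
# model.safetensors.index.json in this repo has the same overall shape
# but different tensor names and sizes.
index_json = """
{
  "metadata": {"total_size": 11174000000},
  "weight_map": {
    "model.embed_tokens.weight": "model-00001-of-00003.safetensors",
    "model.layers.0.mlp.experts.weight": "model-00002-of-00003.safetensors",
    "lm_head.weight": "model-00003-of-00003.safetensors"
  }
}
"""

index = json.loads(index_json)

def shard_for(tensor_name: str) -> str:
    """Return the shard file that stores `tensor_name`."""
    return index["weight_map"][tensor_name]

# Group tensors by shard, as a loader would do so each file
# is opened only once.
shards = {}
for name, fname in index["weight_map"].items():
    shards.setdefault(fname, []).append(name)

print(shard_for("lm_head.weight"))  # model-00003-of-00003.safetensors
```

Grouping by shard before loading is the usual design choice: it turns many per-tensor lookups into one sequential read per .safetensors file.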