majentik/Nemotron-3-Nano-4B-RotorQuant-MLX-2bit
Pipeline: Text Generation
Tags: MLX, Safetensors, nemotron_h, rotorquant, kv-cache-quantization, nemotron, nvidia, mamba2, hybrid, quantized, 2bit, conversational, custom_code, 2-bit
arXiv: 2504.19874
License: nvidia-open-model-license
Nemotron-3-Nano-4B-RotorQuant-MLX-2bit (1.26 GB)
1 contributor (majentik) · History: 3 commits · Latest commit: Add model card (0b99ae5, verified, 2 days ago)
File                            Size       Last commit                       When
.gitattributes                  1.57 kB    Add MLX quantized model weights   3 days ago
README.md                       4.19 kB    Add model card                    2 days ago
__init__.py                     0 Bytes    Add MLX quantized model weights   3 days ago
chat_template.jinja             10.5 kB    Add MLX quantized model weights   3 days ago
config.json                     1.6 kB     Add MLX quantized model weights   3 days ago
configuration_nemotron_h.py     12.1 kB    Add MLX quantized model weights   3 days ago
generation_config.json          188 Bytes  Add MLX quantized model weights   3 days ago
model.safetensors               1.24 GB    Add MLX quantized model weights   3 days ago
model.safetensors.index.json    31.3 kB    Add MLX quantized model weights   3 days ago
modeling_nemotron_h.py          78.6 kB    Add MLX quantized model weights   3 days ago
nano_v3_reasoning_parser.py     798 Bytes  Add MLX quantized model weights   3 days ago
tokenizer.json                  17.1 MB    Add MLX quantized model weights   3 days ago
tokenizer_config.json           372 Bytes  Add MLX quantized model weights   3 days ago