Hugging Face
mlx-community/DeepSeek-R1-Distill-Qwen-32B-4bit
45 likes · by MLX Community (8.44k followers)
Tags: Text Generation · Transformers · Safetensors · MLX · qwen2 · conversational · text-generation-inference · 4-bit precision
Files and versions (branch: main) · 18.4 GB · 2 contributors · History: 3 commits
Latest commit: awni · Update tokenizer_config.json · 4e0d384 (verified) · 11 months ago
File                              Size       Last commit                               When
.gitattributes                    1.57 kB    Upload folder using huggingface_hub (#1)  about 1 year ago
README.md                         967 Bytes  Upload folder using huggingface_hub (#1)  about 1 year ago
config.json                       868 Bytes  Upload folder using huggingface_hub (#1)  about 1 year ago
model-00001-of-00004.safetensors  5.37 GB    Upload folder using huggingface_hub (#1)  about 1 year ago
model-00002-of-00004.safetensors  5.34 GB    Upload folder using huggingface_hub (#1)  about 1 year ago
model-00003-of-00004.safetensors  5.37 GB    Upload folder using huggingface_hub (#1)  about 1 year ago
model-00004-of-00004.safetensors  2.36 GB    Upload folder using huggingface_hub (#1)  about 1 year ago
model.safetensors.index.json      143 kB     Upload folder using huggingface_hub (#1)  about 1 year ago
special_tokens_map.json           485 Bytes  Upload folder using huggingface_hub (#1)  about 1 year ago
tokenizer.json                    11.4 MB    Upload folder using huggingface_hub (#1)  about 1 year ago
tokenizer_config.json             6.76 kB    Update tokenizer_config.json              11 months ago
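The stated repository size of 18.4 GB is almost entirely the four safetensors weight shards. A quick sanity check, summing the shard sizes exactly as listed above (the file names and sizes are taken from this listing, not assumed):

```python
# Shard sizes in GB, copied from the repository file listing above.
shards = {
    "model-00001-of-00004.safetensors": 5.37,
    "model-00002-of-00004.safetensors": 5.34,
    "model-00003-of-00004.safetensors": 5.37,
    "model-00004-of-00004.safetensors": 2.36,
}

total_gb = sum(shards.values())
print(f"{total_gb:.2f} GB")  # ~18.44 GB, consistent with the 18.4 GB repo total
```

The remaining files (tokenizer, configs, index) add only a few megabytes, which is why the repo total rounds to 18.4 GB.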