Hugging Face
BlueMoonlight/DeepSeek-R1-Distill-Qwen-14B-mlx-4Bit
Tags: Text Generation · Transformers · Safetensors · MLX · qwen2 · conversational · text-generation-inference · 4-bit precision
License: MIT
Branch: main · 8.32 GB · 1 contributor · History: 11 commits
Latest commit: 1aeabea (verified), "Upload README.md with huggingface_hub" by BlueMoonlight, about 2 months ago
Files:
.gitattributes (Safe) · 1.57 kB · Upload tokenizer.json with huggingface_hub · about 2 months ago
README.md · 996 Bytes · Upload README.md with huggingface_hub · about 2 months ago
chat_template.jinja (Safe) · 2.25 kB · Upload chat_template.jinja with huggingface_hub · about 2 months ago
config.json · 920 Bytes · Upload config.json with huggingface_hub · about 2 months ago
generation_config.json · 181 Bytes · Upload generation_config.json with huggingface_hub · about 2 months ago
model-00001-of-00002.safetensors (xet) · 5.35 GB · Upload model-00001-of-00002.safetensors with huggingface_hub · about 2 months ago
model-00002-of-00002.safetensors (xet) · 2.96 GB · Upload model-00002-of-00002.safetensors with huggingface_hub · about 2 months ago
model.safetensors.index.json (Safe) · 107 kB · Upload model.safetensors.index.json with huggingface_hub · about 2 months ago
special_tokens_map.json (Safe) · 485 Bytes · Upload special_tokens_map.json with huggingface_hub · about 2 months ago
tokenizer.json (Safe, xet) · 11.4 MB · Upload tokenizer.json with huggingface_hub · about 2 months ago
tokenizer_config.json (Safe) · 4.49 kB · Upload tokenizer_config.json with huggingface_hub · about 2 months ago