mlx-community/DeepSeek-V4-Flash-4bit
Tags: Text Generation · MLX · Safetensors · English · deepseek_v4 · conversational · 4-bit precision
Community (2 discussions)
- #2 "Model type deepseek_v4 not supported." — opened 11 days ago by linxin111
- #1 "Is it possible to convert to a 2-bit quantized version?" — opened 12 days ago by hehua2008
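For context on discussion #1: MLX community quants are typically produced with the `mlx_lm.convert` utility, which can re-quantize a Hugging Face checkpoint at a chosen bit width. A minimal sketch follows; the upstream repo name is a hypothetical placeholder, and per discussion #2 the installed `mlx-lm` release may not yet support the `deepseek_v4` model type, in which case the command would fail at load time.

```shell
# Sketch: produce a 2-bit MLX quantization with mlx-lm.
# Assumes mlx-lm is installed (pip install mlx-lm) and supports
# the deepseek_v4 architecture (see discussion #2).
# NOTE: the --hf-path value below is a hypothetical placeholder,
# not confirmed by this page.
mlx_lm.convert \
    --hf-path deepseek-ai/DeepSeek-V4-Flash \
    --mlx-path DeepSeek-V4-Flash-2bit \
    -q --q-bits 2 --q-group-size 64
```

Quality at 2-bit is generally much worse than 4-bit unless a larger group size or mixed-precision recipe is used, which may be why no such quant exists yet.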