# DeepSeek Base Model in GGUF Format
This is the base DeepSeek 1.5B model converted to GGUF format for efficient inference.
## Model Details
- Base model: DeepSeek-R1-Distill-Qwen-1.5B
- Quantization: Q8_0
- Format: GGUF
## Usage
This model can be used with llama.cpp and other GGUF-compatible inference engines.
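For example, with llama.cpp's `llama-cli` binary (a sketch; the GGUF filename below is illustrative and should be replaced with the actual file from this repo):

```shell
# One-off generation with llama.cpp; -n caps the number of tokens generated.
./llama-cli -m deepseek-1.5b-q8_0.gguf \
    -p "Write a haiku about autumn." \
    -n 128
```

The same file also works with other GGUF-compatible runtimes such as Ollama or llama-cpp-python.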
## Original Model
This model was converted from deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B.
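The conversion can be reproduced with llama.cpp's conversion and quantization tools. A sketch, assuming a local llama.cpp checkout; the local directory and output filenames are illustrative:

```shell
# Fetch the original Hugging Face model.
pip install -U "huggingface_hub[cli]"
huggingface-cli download deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B \
    --local-dir DeepSeek-R1-Distill-Qwen-1.5B

# From the llama.cpp checkout: convert to GGUF at F16, then quantize to Q8_0.
python convert_hf_to_gguf.py DeepSeek-R1-Distill-Qwen-1.5B \
    --outtype f16 --outfile deepseek-1.5b-f16.gguf
./llama-quantize deepseek-1.5b-f16.gguf deepseek-1.5b-q8_0.gguf Q8_0
```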