# DeepSeek Base Model in GGUF Format
This is the DeepSeek 1.5B model converted to GGUF format for efficient local inference.
## Model Details
- Base model: DeepSeek 1.5B
- Quantization: Q8_0 (8-bit)
- Format: GGUF
## Usage
This model can be used with llama.cpp and other GGUF-compatible inference engines.
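For example, after building llama.cpp, the model can be run directly from the command line. This is a minimal sketch: the `.gguf` filename below is an assumption, so substitute the actual file name from this repository.

```shell
# Run a single prompt through the quantized model with llama.cpp.
# -m : path to the GGUF model file (filename here is an assumption)
# -p : the prompt text
# -n : maximum number of tokens to generate
./llama-cli -m deepseek-1.5b-q8_0.gguf \
    -p "Explain what GGUF is in one sentence." \
    -n 128
```

The Q8_0 quantization keeps memory usage low enough that a 1.5B model runs comfortably on CPU-only machines.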
## Original Model
This model was converted from deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B.