This repository contains yang-z/CodeV-DS-6.7B converted to the GGUF format.
The GGUF file stores the weights at FP16 precision.
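As a quick sanity check after downloading, you can verify that a file really is GGUF by inspecting its header: every GGUF file begins with the 4-byte magic `b"GGUF"` followed by a little-endian uint32 format version. The sketch below assumes a placeholder filename (`demo.gguf`) and writes a minimal stand-in header just to demonstrate the check; point it at the downloaded model file in practice.

```python
import struct

def read_gguf_header(path):
    """Return the GGUF format version, or raise if the magic is wrong.

    A GGUF file starts with the magic bytes b"GGUF" followed by a
    little-endian uint32 version number.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        (version,) = struct.unpack("<I", f.read(4))
    return version

# Write a minimal stand-in header to demonstrate the check
# (a real model file continues with metadata and tensor data).
with open("demo.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))

print(read_gguf_header("demo.gguf"))  # → 3
```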