A 4-bit GPTQ-quantized version of DeepCoder-14B-Preview for use with the Private LLM app.
Model tree for numen-tech/DeepCoder-14B-Preview-GPTQ-Int4
- Base model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
- Finetuned: agentica-org/DeepCoder-14B-Preview