Use from the MLC-LLM library
No code snippets are available yet for this library. To use this model, check the repository files and the MLC-LLM documentation.

Want to help? PRs adding snippets are welcome at https://github.com/huggingface/huggingface.js.

4-bit GPTQ quantized version of DeepCoder-14B-Preview for use with the Private LLM app.

Model tree for numen-tech/DeepCoder-14B-Preview-GPTQ-Int4