---
license: other
license_name: deepseek
license_link: >-
  https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct/blob/main/LICENSE
pipeline_tag: text-generation
library_name: gguf
---

GGUF importance matrix (imatrix) quants for https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B

* The importance matrix was trained for ~50K tokens (105 batches of 512 tokens) using a [general purpose imatrix calibration dataset](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
* The imatrix is [also applied to the K-quants](https://github.com/ggerganov/llama.cpp/pull/4930).
| Layers | Context | [Template](https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B#inference-string-format) |
| --- | --- | --- |
| <pre>62</pre> | <pre>16384</pre> | <pre>\<s\>system<br>{instructions}<br>\<s\>human<br>{prompt}<br>\<s\>bot<br>{response}<\|end▁of▁sentence\|></pre> |
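
The template above can be assembled programmatically. Below is a minimal sketch; the helper name `build_prompt` is an illustration, not part of any library. It stops at the `<s>bot` turn marker, since the model itself generates `{response}` followed by `<|end▁of▁sentence|>`.

```python
def build_prompt(instructions: str, prompt: str) -> str:
    """Format a request per the CodeFuse-DeepSeek-33B inference string
    format: <s>system / <s>human turns, ending at the <s>bot marker so
    the model completes the response."""
    return (
        f"<s>system\n{instructions}\n"
        f"<s>human\n{prompt}\n"
        f"<s>bot\n"
    )

text = build_prompt(
    "You are a helpful coding assistant.",
    "Write a function that reverses a string.",
)
print(text)
```

When running the GGUF with llama.cpp, `<|end▁of▁sentence|>` serves as the stop token for the generated response.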