---
license: apache-2.0
pipeline_tag: text-generation
library_name: gguf
---
<u>**NOTE**</u>: You will need a recent build of llama.cpp to run these quants (at least commit `494c870`).

GGUF importance matrix (imatrix) quants for https://huggingface.co/m-a-p/OpenCodeInterpreter-CL-70B

* The importance matrix was trained on ~50K tokens (105 batches of 512 tokens) using a [general purpose imatrix calibration dataset](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
* The [imatrix is also applied to the K-quants](https://github.com/ggerganov/llama.cpp/pull/4930).
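For reference, the "~50K tokens" figure follows directly from the batch numbers above; a quick illustration (not part of the quantization pipeline itself):

```python
# Calibration run size as stated above: 105 batches of 512 tokens each.
batches = 105
tokens_per_batch = 512

total_tokens = batches * tokens_per_batch
print(total_tokens)  # 53760 tokens, i.e. the "~50K" figure
```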