Conversion was done using the default calibration dataset.

Default arguments used, except when the bits per weight is above 6.0; in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6.

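The head-layer rule above can be sketched as a tiny helper (the function name `head_bits` is hypothetical; the 6.0 threshold and the 8/6-bit values come from the note):

```python
def head_bits(bits_per_weight: float) -> int:
    """Bits used for the lm_head layer at a given overall bpw.

    Per the note above: above 6.0 bpw the head layer is quantized
    at 8 bits per weight instead of the default 6.
    """
    return 8 if bits_per_weight > 6.0 else 6
```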
Original model: https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B
<a href="https://huggingface.co/bartowski/CodeNinja-1.0-OpenChat-7B-exl2/tree/8_0">8.0 bits per weight</a>
## Download instructions
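The repo's own download steps are not shown in this diff; as a sketch, one common way to fetch a single quant branch is the `huggingface-cli` tool from the `huggingface_hub` package (the local directory name here is an assumption):

```shell
# Install the Hugging Face Hub CLI, then fetch only the 8.0 bpw
# revision (branch 8_0) into a local folder.
pip install -U "huggingface_hub[cli]"
huggingface-cli download bartowski/CodeNinja-1.0-OpenChat-7B-exl2 \
  --revision 8_0 \
  --local-dir CodeNinja-1.0-OpenChat-7B-exl2-8_0
```

`--revision` selects the branch holding that quant; without it the default branch is downloaded instead.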