Update README.md
README.md
https://huggingface.co/datasets/wikitext
or "good articles".\
https://huggingface.co/datasets/asi/wikitext_fr

**c4** (exllamav2)\
Constructed from news articles?
**code** (exllamav2)\
Programming code.
**multilingual** (exllamav2)\
English, Arabic, Chinese, French, German, Japanese, Polish, Russian, Spanish, Swedish, Turkish, Hebrew,
Macedonian, Norwegian, Lithuanian, Greek, Italian, Afrikaans, Dutch, Danish.
**technical** (exllamav2)\
Technical writing.
**tiny** (exllamav2)\
Very short stories. Be mindful of the prevalence of _"Once upon a time"_ and _"<|end_of_text|>"_.
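That warning is easy to check mechanically before using a file for calibration. A minimal sketch (the sample text and the helper name are illustrative, not part of the dataset):

```python
# Count how often boilerplate markers appear in a calibration text,
# to judge whether they would dominate the imatrix statistics.
# The two markers below are the ones called out for the "tiny" dataset.
def marker_prevalence(text: str, markers=("Once upon a time", "<|end_of_text|>")) -> dict:
    return {m: text.count(m) for m in markers}

sample = 'Once upon a time there was a fox.<|end_of_text|>Once upon a time...'
print(marker_prevalence(sample))  # → {'Once upon a time': 2, '<|end_of_text|>': 1}
```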
**wiki** (exllamav2)\
Small Wikipedia dump. Unclean; contains many unwanted tags.
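A quick regular-expression pass can remove most of those tags before the file is used for calibration. A rough sketch, assuming "unwanted tags" means HTML/XML-style markup (this is deliberately crude, not a full wiki-markup parser):

```python
import re

# Strip anything that looks like an HTML/XML tag, e.g. <ref>...</ref>
# leftovers from an unclean Wikipedia dump. The tags are dropped but the
# text between them is kept. Special tokens like <|end_of_text|> do not
# match because the pattern requires a letter after '<' or '</'.
TAG = re.compile(r"</?[A-Za-z][^>]*>")

def strip_tags(text: str) -> str:
    return TAG.sub("", text)

print(strip_tags("A <ref name=x>cite</ref> sentence.<br/>"))  # → A cite sentence.
```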
exllamav2 calibration data taken from:\
https://github.com/turboderp/exllamav2/tree/master/conversion/standard_cal_data
## How to quantise using an imatrix with llama.cpp
1. Get one of the input files collected here, or elsewhere.
2. Convert or download the model you want to quantise, in fp16 GGUF format.
3. Generate an imatrix file specific to the model you want to quantise:
```
cd <llama.cpp directory>
./imatrix -m <model_path>/ggml-model-f16.gguf -f <plain_text_matrix_file> -o <output.matrix> -t 12 -ngl 144 --chunks 100 -b 512 -c 512

# -ngl : number of layers offloaded to the GPU (recommended: all the layers the model contains)
# -t 12 : number of threads (should probably match the number of CPU cores)
# --chunks 100 (recommended)
# --mlock : keep the model in RAM (only use if you have sufficient RAM for the whole fp16 model)
```
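As a sanity check before running `./imatrix`: with `-c 512`, each chunk consumes 512 tokens, so `--chunks 100` needs at least 51,200 tokens of calibration text. A back-of-envelope estimate in Python (the 4-characters-per-token ratio is a common rule of thumb for English text, not an exact figure — the real count depends on the model's tokenizer):

```python
# Estimate how many imatrix chunks a calibration file can supply.
CTX = 512            # -c / -b : tokens consumed per chunk
CHARS_PER_TOKEN = 4  # rough average for English text; varies by tokenizer

def max_chunks(file_size_bytes: int, ctx: int = CTX) -> int:
    est_tokens = file_size_bytes // CHARS_PER_TOKEN
    return est_tokens // ctx

# A ~1 MiB calibration file comfortably covers --chunks 100:
print(max_chunks(1024 * 1024))  # → 512
```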
4. Use the generated matrix file to quantise the model:
```
./quantize --matrix <output.matrix> <model_path>/ggml-model-f16.gguf <quantisation_level, eg: IQ4_XS>
```
|
| 93 |
+
Note: normal quantisation also benefits from using a matrix file. It also seem that a bigger input matrix is
|
| 94 |
better for higher quantisation.
|
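One way to get a bigger input file, per the note above, is simply to concatenate several of the calibration sets before passing the result to `./imatrix -f`. A minimal sketch (the filenames in the commented example are hypothetical placeholders, not files guaranteed to exist in this repo):

```python
from pathlib import Path

# Concatenate several plain-text calibration files into one larger input
# file, separated by newlines, for use with ./imatrix -f.
def combine(paths, out_path):
    with open(out_path, "w", encoding="utf-8") as out:
        for p in paths:
            out.write(Path(p).read_text(encoding="utf-8"))
            out.write("\n")

# Example (placeholder filenames):
# combine(["wiki.txt", "technical.txt", "code.txt"], "combined.txt")
```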