changed example of custom matrix to be a good example for preserving multilingual capabilities, coding/reasoning skills, and a bit of decensoring
README.md CHANGED

@@ -13,7 +13,7 @@ training data. Size also matters: the bigger the model (eg: 70b vs 13b) and the
 the bigger the source file needs to be to make an impact. Multiple input files can be combined if needed;
 for example:
 ```
-cat
+cat multilingual.txt code.txt badwords_multilingual.txt > custom_multilingual.matrix
 ```
 Note on **context size** when generating the matrix: in general, a small context size such as 512 is recommended, and community
 tests have shown it usually performs better than a larger one such as 4096. However, I would argue this is highly dependent on the
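The new example from the hunk can be tried end to end as a small shell sketch. The three input file names come from the diff itself; their contents here are tiny illustrative stand-ins, and the commented `llama-imatrix` invocation at the end is an assumption based on llama.cpp's imatrix tooling, not something stated in this README.

```shell
# Illustrative stand-ins for the three calibration sets named in the diff;
# in practice these would be real multilingual, code, and decensoring datasets.
printf 'Bonjour le monde. Hallo Welt.\n' > multilingual.txt
printf 'def add(a, b):\n    return a + b\n' > code.txt
printf 'example raw text\n' > badwords_multilingual.txt

# Combine them into a single calibration file, as the README example shows.
cat multilingual.txt code.txt badwords_multilingual.txt > custom_multilingual.matrix

# Hypothetical next step with llama.cpp's imatrix tool (flag names assumed);
# -c 512 would set the small context size the note above recommends:
#   llama-imatrix -m model-f16.gguf -f custom_multilingual.matrix -o imatrix.dat -c 512
```

The combined file is plain concatenated text; ordering the inputs only changes which data comes first, not how the importance matrix weighs it overall.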