This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

The goal was to create a Llama 3 13B model, a "mid"-sized model of the kind Meta has released in the past, but I would consider this a base model to be further finetuned.
Surprisingly, it is usable for chat and storywriting with the Llama 3 Instruct template, though it does occasionally have some grammatical quirks like L3-120B.

Logical ability (programming, math, science, etc.) has deteriorated in the merge process.

Use **<u>no repetition penalty, or less than 1.05</u>**, or it might go a bit haywire; other than that, it is suitable for writing use. I have not tested it against L3 8B in that regard.
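For concreteness, here is a minimal sketch of sampling settings that follow this advice, assuming Hugging Face `transformers`-style generation kwargs (every value other than the repetition-penalty bound is illustrative, not from this card):

```python
# Sampling settings honoring the advice above: keep repetition_penalty at
# 1.0 (disabled) or strictly below 1.05. All other values are illustrative.
gen_kwargs = {
    "do_sample": True,
    "temperature": 0.8,         # illustrative choice
    "top_p": 0.95,              # illustrative choice
    "repetition_penalty": 1.0,  # 1.0 = off; any value used should stay < 1.05
}

assert gen_kwargs["repetition_penalty"] < 1.05
print(gen_kwargs["repetition_penalty"])  # → 1.0
```

These kwargs can be passed straight to `model.generate(**inputs, **gen_kwargs)` or the equivalent setting in most inference frontends.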

## Finetuned Version

A finetuned version of this model can be found at [elinas/Llama-3-13B-Instruct-ft](https://huggingface.co/elinas/Llama-3-13B-Instruct-ft), which seems to improve performance.

## Merge Details

### Merge Method

```yaml
- sources:
  - layer_range: [22, 32]
    model: meta-llama/Meta-Llama-3-8B-Instruct
```

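As a side note, the depth of a passthrough merge follows directly from the slice ranges: each `layer_range: [a, b]` entry contributes `b - a` transformer layers to the stacked result. A small sketch (only the final `[22, 32]` slice appears in the config excerpt above; the other ranges here are hypothetical):

```python
# Each passthrough slice contributes (end - start) layers to the merged stack.
def total_layers(slices):
    return sum(end - start for start, end in slices)

# Hypothetical slice list; only [22, 32] comes from the config excerpt above.
example_slices = [(0, 12), (6, 18), (12, 24), (22, 32)]
print(total_layers(example_slices))  # → 46
```

Stacking overlapping ranges this way is how an 8B base yields a deeper (here, roughly 13B-parameter) model without any new weights being trained.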

## Model Evaluation

TBD - submitted