---
# miqurelian-120b

This is a 120b merge created by interleaving layers of [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) with [Aurelian](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16), a creative writing model, using [mergekit](https://github.com/cg123/mergekit). It performs at approximately SOTA level for long-context creative writing tasks that require strong semantic coherence.
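
For intuition about the layer count: a frankenmerge like this interleaves overlapping slices of the two 80-layer 70b source models. The slice plan below is a hypothetical sketch that is merely consistent with the 140-layer total stated under Model Details, not the published recipe for this merge:

```python
# Hypothetical interleaving plan: alternate 20-layer slices of the two
# 80-layer source models, overlapping by 10 layers. Seven slices of 20
# layers each give the 140 layers stated in this card.
SOURCES = ["miqu-1-70b-sf", "aurelian-v0.5-70b-rope8-32K-fp16"]

slices = [
    (SOURCES[i % 2], start, start + 20)  # (model, first layer, last layer + 1)
    for i, start in enumerate(range(0, 70, 10))
]
total_layers = sum(end - start for _, start, end in slices)
print(total_layers)  # 140
```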

## Model Details

- Max Context: 32768 tokens
- Layers: 140

### Prompt template

```
<s>[INST] {prompt} [/INST]
```
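
As a usage illustration, here is a minimal sketch of filling the template in Python; the function name and example prompt are placeholders, not part of this card:

```python
# Wrap a user prompt in the instruct template expected by this merge.
def format_prompt(prompt: str) -> str:
    return f"<s>[INST] {prompt} [/INST]"

print(format_prompt("Write the opening scene of a slow-burn mystery."))
# <s>[INST] Write the opening scene of a slow-burn mystery. [/INST]
```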

### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
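
Linear merging (the "model soups" approach of Wortsman et al., linked above) takes a weighted average of matching parameter tensors from the source models. A minimal sketch, with an illustrative equal weighting rather than the actual weights used for this merge:

```python
# Minimal sketch of a linear merge: a per-tensor weighted average of two
# state dicts with identical keys and shapes. The 0.5/0.5 split is
# illustrative, not the recipe used for miqurelian-120b.
import torch

def linear_merge(
    state_a: dict[str, torch.Tensor],
    state_b: dict[str, torch.Tensor],
    w_a: float = 0.5,
) -> dict[str, torch.Tensor]:
    w_b = 1.0 - w_a
    return {name: w_a * t + w_b * state_b[name] for name, t in state_a.items()}
```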