Update README.md
README.md CHANGED

```diff
@@ -6,9 +6,17 @@ tags:
 - merge
 
 ---
-#
+# Magic-Dolphin-7b
+
+Magic-Dolphin-7b
+
+A linear merge of dolphin-2.6-mistral-7b-dpo-laser, merlinite-7b, and Hyperion-1.5-Mistral-7B. These three models showed excellent acumen in technical topics, so I wanted to see how they would behave together in a merge. Several different ratios were tested before this release; in the end, a higher weighting for merlinite-7b helped smooth out some rough edges. This model is a test of how LAB tuning is impacted by merges with models leveraging DPO.
+
+This was my first experiment with merging models, so any feedback is greatly appreciated.
+
+Uses Alpaca template.
+
 
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 ## Merge Details
 ### Merge Method
```
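For reference, a linear merge like the one described above can be expressed as a mergekit YAML config. This is a minimal sketch, not the exact recipe behind this release: the final ratios are not stated in this excerpt, so the weights below are illustrative (merlinite-7b simply weighted a bit higher, as the description suggests), and the Hugging Face repo paths are assumed.

```yaml
# Hypothetical mergekit config for a linear merge of the three models.
# Weights are illustrative only; the card says merlinite-7b received a
# higher weighting, but does not give the exact ratios used.
models:
  - model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
    parameters:
      weight: 1.0
  - model: ibm/merlinite-7b
    parameters:
      weight: 1.3   # higher weighting to smooth out rough edges
  - model: Locutusque/Hyperion-1.5-Mistral-7B
    parameters:
      weight: 1.0
merge_method: linear
dtype: float16
```

A config like this would be run with `mergekit-yaml config.yml ./output-dir` to produce the merged checkpoint.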
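Since the card only says "Uses Alpaca template", here is the standard Alpaca prompt format for reference, with `{instruction}` standing in for the user's prompt:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```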