# The-Omega-Directive-12B-v1.0
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
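For readers unfamiliar with mergekit, a merge like this is driven by a small YAML recipe. The config below is only an illustrative sketch of a layer-removing (passthrough) merge — the actual source model, layer ranges, and merge method used here are not published, so every name and number in it is a hypothetical assumption:

```yaml
# Hypothetical mergekit recipe: a passthrough merge that drops layers 16-23
# of a 40-layer source model. Model name and ranges are illustrative only.
slices:
  - sources:
      - model: example/source-12b-model   # placeholder, not the real source
        layer_range: [0, 16]
  - sources:
      - model: example/source-12b-model
        layer_range: [24, 40]
merge_method: passthrough
dtype: bfloat16
```

A recipe like this is run with `mergekit-yaml config.yml ./output-dir`, which writes the merged weights to the output directory.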
This model is highly repetitive and does not function well. After removing the layers, I found the model largely unusable. However, I am currently building a small RP dataset from synthetic data generated by Claude 3.7 and Haiku 3.5, which I will use to retrain the smaller models.
## Merge Details
### Merge Method