---
pipeline_tag: text-generation
tags:
- mistral
- merge
license: cc-by-4.0
---
|
|
# Model Card for Sina-Loki-7b-Merge

<!-- Provide a quick summary of what the model is/does. -->

This model is part of a series of experimental DARE merges.

The mergekit configuration (`.yaml`) used to produce it:
|
|
```yaml
models:
  - model: RatanRohith/SRBOSGPT-7B-slerp
    # no parameters necessary for base model
  - model: rishiraj/smol-7b #75
    parameters:
      weight: 0.2
      density: 0.41
  - model: SanjiWatsuki/openchat-3.5-1210-starling-slerp #125
    parameters:
      weight: 0.33
      density: 0.54
  - model: Azazelle/Dumb-Maidlet #200
    parameters:
      weight: 0.53
      density: 0.71
merge_method: dare_ties
base_model: RatanRohith/SRBOSGPT-7B-slerp
parameters:
  int8_mask: true
dtype: bfloat16
```
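For reference, a merge like this is typically reproduced with mergekit's `mergekit-yaml` entry point. This is a sketch, not a verified recipe: the config filename and output directory below are illustrative, and flags may vary between mergekit versions.

```shell
# Assumption: installing mergekit from PyPI; a pinned version may be preferable
pip install mergekit

# Run the merge described by the config above.
# config.yaml  - the YAML configuration from this card (illustrative filename)
# ./sina-loki-7b-merge - illustrative output directory for the merged weights
mergekit-yaml config.yaml ./sina-loki-7b-merge
```

Merging on CPU works but is slow; mergekit also offers a `--cuda` flag to use a GPU when one is available.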