# Marcoro14-7B-slerp

Marcoro14-7B-slerp is a merge of the following models using mergekit:

* [Rimyy/Llama-2-7b-chat-finetuneGSMdata](https://huggingface.co/Rimyy/Llama-2-7b-chat-finetuneGSMdata)
* [Rimyy/Gemma-2b-finetuneGSMdata5ep](https://huggingface.co/Rimyy/Gemma-2b-finetuneGSMdata5ep)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: Rimyy/Llama-2-7b-chat-finetuneGSMdata
        layer_range: [0, 10]
      - model: Rimyy/Gemma-2b-finetuneGSMdata5ep
        layer_range: [0, 10]
merge_method: slerp
base_model: Rimyy/Llama-2-7b-chat-finetuneGSMdata
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
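Once the config above has been run through mergekit to produce a merged checkpoint, the result can be loaded like any other Hugging Face causal LM. Below is a minimal usage sketch with the `transformers` library; the model path and the prompt are placeholders, not part of this repository.

```python
# Minimal sketch: load the merged model and generate a completion.
# "./Marcoro14-7B-slerp" is an assumed local path to the merge output
# (or a Hub repo id); substitute the actual location of the checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./Marcoro14-7B-slerp"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

prompt = "Natalia sold clips to 48 of her friends. How many clips did she sell in total?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```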
