---
license: apache-2.0
base_model:
- mlabonne/NeuralHermes-2.5-Mistral-7B
- teknium/OpenHermes-2.5-Mistral-7B
tags:
- merge
- mergekit
- lazymergekit
- mistral
- hermes
---
# SUONG-4 (7B Parameters)

This is a merge of pre-trained language models created using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing), combining the strengths of NeuralHermes-2.5 and OpenHermes-2.5 (both Mistral-7B fine-tunes) through a progressive SLERP fusion.

## About Me
I'm David Soeiro-Vuong, a third-year Computer Science student working as an apprentice at TW3 Partners, a company specializing in Generative AI. Passionate about artificial intelligence and language model optimization, I focus on creating efficient model merges that balance performance and capabilities.

🔗 [Connect with me on LinkedIn](https://www.linkedin.com/in/david-soeiro-vuong-a28b582ba/)

## Merge Details
### Merge Method
This model uses SLERP (Spherical Linear Interpolation) with a layer-wise progressive fusion schedule (sketched in code below):
- Self-attention layers: the interpolation factor t ramps from 0 to 1 across the 32-layer stack
- MLP layers: the inverse schedule, with t ramping from 1 to 0
- All remaining tensors: a constant global fusion ratio of t = 0.45
- bfloat16 weights for efficient memory usage
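
As a rough sketch of what SLERP does to each pair of weight tensors: it interpolates along the great circle between the two flattened weight vectors, rather than along the straight line a plain weighted average would take. The `slerp` helper below and its near-parallel fallback threshold are illustrative assumptions, not mergekit's actual implementation:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Spherically interpolate between two weight tensors.

    slerp(t; a, b) = sin((1 - t) * omega) / sin(omega) * a
                   + sin(t * omega) / sin(omega) * b,
    where omega is the angle between a and b viewed as flat vectors.
    """
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Angle between the two weight vectors (clamped for numerical safety)
    cos_omega = torch.clamp(
        (a_flat / a_flat.norm()) @ (b_flat / b_flat.norm()), -1.0, 1.0
    )
    omega = torch.acos(cos_omega)
    if omega < 1e-4:
        # Nearly colinear vectors: plain linear interpolation is stabler
        merged = (1 - t) * a_flat + t * b_flat
    else:
        sin_omega = torch.sin(omega)
        merged = (
            torch.sin((1 - t) * omega) / sin_omega * a_flat
            + torch.sin(t * omega) / sin_omega * b_flat
        )
    return merged.reshape(a.shape).to(a.dtype)

# Example: merge one attention projection at t = 0.3
# merged_w = slerp(0.3, neural_hermes_w, open_hermes_w)
```

mergekit interpolates each `value` list in the configuration below across the 32 layers, so early self_attn tensors get t close to 0 (staying near the base model) and late ones t close to 1, while MLP tensors follow the inverse ramp.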

### Models Merged
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)

### Configuration
```yaml
slices:
  - sources:
      - model: mlabonne/NeuralHermes-2.5-Mistral-7B
        layer_range: [0, 32]
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralHermes-2.5-Mistral-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.3, 0.6, 0.9, 1]  
    - filter: mlp
      value: [1, 0.7, 0.4, 0.1, 0]  
    - value: 0.45 
dtype: bfloat16
```
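
## Usage

To reproduce the merge locally, save the YAML above (e.g. as `config.yaml`) and run it through mergekit's CLI: `mergekit-yaml config.yaml ./SUONG-4` (the output path is chosen here for illustration). The result loads like any other Mistral-7B checkpoint. The repo id below is a placeholder, since this card doesn't state the model's final Hugging Face path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: replace with the model's actual Hugging Face path.
model_id = "SUONG-4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

# Both parent models were trained with ChatML, so the tokenizer's
# chat template should apply directly.
messages = [{"role": "user", "content": "Summarize SLERP merging in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```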