---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- abideen/NexoNimbus-7B
- fblgit/UNA-TheBeagle-7b-v1
- argilla/distilabeled-Marcoro14-7B-slerp
base_model:
- udkai/Turdus
- abideen/NexoNimbus-7B
- fblgit/UNA-TheBeagle-7b-v1
- argilla/distilabeled-Marcoro14-7B-slerp
model-index:
- name: MergeTrix-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 72.27
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CultriX/MergeTrix-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 87.84
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CultriX/MergeTrix-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.88
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CultriX/MergeTrix-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 66.27
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CultriX/MergeTrix-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 83.5
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CultriX/MergeTrix-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 71.19
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=CultriX/MergeTrix-7B
      name: Open LLM Leaderboard
---

# EDIT:
Always check my space for the latest benchmark results for my models!
* https://huggingface.co/spaces/CultriX/Yet_Another_LLM_Leaderboard
  
# IMPORTANT NOTE | READ ME! #
This model uses udkai/Turdus, which may produce inaccurate results for the Winogrande evaluation scores.
The following quotes are taken directly from that model's page:
- "A less contaminated version of udkai/Garrulus and the second model to be discussed in the paper Subtle DPO-Contamination with modified Winogrande increases TruthfulQA, Hellaswag & ARC."
- "Subtle DPO-Contamination with modified Winogrande causes the average accuracy of all 5-non Winogrande metrics (e.g. including also MMLU and GSM8K) to be 0.2% higher than the underlying model."

In my understanding, the Winogrande scores are only slightly influenced by the DPO contamination, which has the "side-effect" of increasing the scores on the other benchmarks.
Since the effect on the Winogrande scores was subtle in the udkai/Turdus benchmarking results, and this model combines it with other models (probably making the effect even less pronounced),
I still believe this model can be of value to the community, as its overall performance is quite impressive.
However, I do not want to mislead anybody or produce any unfair scores, hence this note! The full merge configuration is also fully transparent and can be found below.

I hope this model will prove useful to somebody. GGUF versions are available for inference here: https://huggingface.co/CultriX/MergeTrix-7B-GGUF.
I personally tested them and found them to produce very pleasing results.

Kind regards,
CultriX

# PERSONAL DISCLAIMER
(This is probably a good moment to point out that I'm an amateur doing this for fun and am by no means an IT professional or data scientist.
My understanding of these topics may therefore be incomplete or simply wrong, which could in turn cause me to make inaccurate claims.
If you notice that's the case, I invite you to point out my mistakes so that I can rectify any inaccuracies as soon as possible. Thanks for understanding!)

# Shoutout
Once again, a major thank you and shoutout to @mlabonne for his amazing article that I used to produce this result, which can be found here: https://towardsdatascience.com/merge-large-language-models-with-mergekit-2118fb392b54
My other model, CultriX/MistralTrix-v1, was based on another great article by the same author, which can be found here: https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac
(I hope he doesn't mind me using his own articles to beat him on the LeaderBoards for the second time this week... Like last time, all credit should really be directed at him!)

# MODEL INFORMATION:
# NAME: MergeTrix-7B

MergeTrix-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [abideen/NexoNimbus-7B](https://huggingface.co/abideen/NexoNimbus-7B)
* [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
* [argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp)

## 🧩 Configuration

```yaml
models:
  - model: udkai/Turdus
    # No parameters necessary for base model
  - model: abideen/NexoNimbus-7B
    parameters:
      density: 0.53
      weight: 0.4
  - model: fblgit/UNA-TheBeagle-7b-v1
    parameters:
      density: 0.53
      weight: 0.3
  - model: argilla/distilabeled-Marcoro14-7B-slerp
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: udkai/Turdus
parameters:
  int8_mask: true
dtype: bfloat16
```
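As a quick sanity check on the configuration above (my own addition, not part of the mergekit output): in a `dare_ties` merge the base model takes no explicit weight, and the weights of the three remaining models should fully allocate the delta contributions. The sketch below mirrors the YAML as a plain dict and confirms the weights sum to 1.0:

```python
# Sanity-check the merge weights from the YAML config above.
# The base model (udkai/Turdus) carries no explicit weight in dare_ties.
merge_models = {
    "abideen/NexoNimbus-7B":                   {"density": 0.53, "weight": 0.4},
    "fblgit/UNA-TheBeagle-7b-v1":              {"density": 0.53, "weight": 0.3},
    "argilla/distilabeled-Marcoro14-7B-slerp": {"density": 0.53, "weight": 0.3},
}

total_weight = sum(p["weight"] for p in merge_models.values())
print(round(total_weight, 2))  # 1.0
```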

## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "CultriX/MergeTrix-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_CultriX__MergeTrix-7B)

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |74.33|
|AI2 Reasoning Challenge (25-Shot)|72.27|
|HellaSwag (10-Shot)              |87.84|
|MMLU (5-Shot)                    |64.88|
|TruthfulQA (0-shot)              |66.27|
|Winogrande (5-shot)              |83.50|
|GSM8k (5-shot)                   |71.19|
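
As a consistency check (my own addition, not leaderboard output), the reported average can be reproduced from the six per-benchmark scores in the table:

```python
# Recompute the leaderboard average from the per-benchmark scores above.
scores = {
    "ARC (25-shot)":        72.27,
    "HellaSwag (10-shot)":  87.84,
    "MMLU (5-shot)":        64.88,
    "TruthfulQA (0-shot)":  66.27,
    "Winogrande (5-shot)":  83.50,
    "GSM8k (5-shot)":       71.19,
}

avg = sum(scores.values()) / len(scores)
print(avg)  # ~74.325, matching the reported average of 74.33
```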