---
base_model:
- amazingvince/Not-WizardLM-2-7B
- CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2
library_name: transformers
tags:
- mergekit
- merge
license: mit
language:
- ko
- en
---
# OpenCarrot-Mix-7B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
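The linear method computes a weighted average of the two models' parameters. A minimal sketch of the idea, assuming each model is represented as a dict of parameter lists and that per-model weights are normalized to sum to 1 (mergekit's default behavior for the linear method):

```python
# Sketch of linear model merging: a normalized weighted average of
# corresponding parameter tensors (represented here as plain lists).
def linear_merge(state_dicts, weights):
    """Merge parameter dicts by weighted average; weights are normalized."""
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = [
            sum(w * sd[name][i] for w, sd in zip(weights, state_dicts)) / total
            for i in range(len(state_dicts[0][name]))
        ]
    return merged

# Toy example with the weights used below (1.0 and 0.5):
model_a = {"w": [2.0, 4.0]}
model_b = {"w": [8.0, 1.0]}
merged = linear_merge([model_a, model_b], weights=[1.0, 0.5])
# merged["w"] == [4.0, 3.0], i.e. (1.0*2 + 0.5*8)/1.5 and (1.0*4 + 0.5*1)/1.5
```

In the real merge, mergekit applies this averaging to every tensor in the two checkpoints.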

### Models Merged

The following models were included in the merge:
* [amazingvince/Not-WizardLM-2-7B](https://huggingface.co/amazingvince/Not-WizardLM-2-7B)
* [CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2](https://huggingface.co/CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2)


### Scores

AVG_llm_kr_eval, compared with reference models:
```
openai/gpt-4                         : 0.6158
gemini-pro                           : 0.515
OpenCarrot-Mix-7B (this model)       : 0.4425
mistralai/Mixtral-8x7B-Instruct-v0.1 : 0.4304
openai/gpt-3.5-turbo                 : 0.4217
```

| ํ‰๊ฐ€ ์ง€ํ‘œ | ์ ์ˆ˜     |
|--------------|---------|
| AVG_llm_kr_eval   | 0.4425  |
| EL            | 0.0522  |
| FA            | 0.0865  |
| NLI           | 0.6700  |
| QA            | 0.5100  |
| RC            | 0.8937  |
| klue_ner_set_f1| 0.0944  |
| klue_re_exact_match   | 0.0100  |
| kmmlu_preview_exact_match | 0.4000  |
| kobest_copa_exact_match    | 0.8200  |
| kobest_hs_exact_match      | 0.5500  |
| kobest_sn_exact_match      | 0.9800  |
| kobest_wic_exact_match     | 0.6200  |
| korea_cg_bleu    | 0.0865  |
| kornli_exact_match   | 0.6400  |
| korsts_pearson | 0.8547  |
| korsts_spearman| 0.8464  |

LogicKor benchmark results:

| ์นดํ…Œ๊ณ ๋ฆฌ | ์‹ฑ๊ธ€ ์ ์ˆ˜ ํ‰๊ท  | ๋ฉ€ํ‹ฐ ์ ์ˆ˜ ํ‰๊ท  |
|----------|------------------|-------------------|
| Coding        | 7.71              | 7.71               |
| Math          | 5.57              | 3.86               |
| Understanding | 6.86              | 8.14               |
| Reasoning     | 8.14              | 6.43               |
| Writing       | 8.71              | 6.86               |
| Grammar       | 5.29              | 2.29               |

| ์นดํ…Œ๊ณ ๋ฆฌ   | ์‹ฑ๊ธ€ ์ ์ˆ˜ ํ‰๊ท  | ๋ฉ€ํ‹ฐ ์ ์ˆ˜ ํ‰๊ท  |
|------------|------------------|-------------------|
| ์ „์ฒด ์‹ฑ๊ธ€ | 7.05             | 5.88              |

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: amazingvince/Not-WizardLM-2-7B
    parameters:
      weight: 1.0
  - model: CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2
    parameters:
      weight: 0.5
merge_method: linear
dtype: float16
```
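Because the linear method normalizes weights by default, the configuration above implies fixed mixing ratios rather than a 1.5x-scaled sum. A quick check of the effective ratios, assuming default normalization:

```python
# Effective mixing ratios implied by the config above, assuming mergekit's
# default weight normalization for the linear merge method.
weights = {
    "amazingvince/Not-WizardLM-2-7B": 1.0,
    "CarrotAI/OpenCarrot-Mistral-7B-Instruct-v0.2": 0.5,
}
total = sum(weights.values())
ratios = {name: w / total for name, w in weights.items()}
# Not-WizardLM-2-7B contributes 2/3 and OpenCarrot-Mistral-7B-Instruct-v0.2
# contributes 1/3 of every merged parameter.
```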