---
tags:
- merge
- mergekit
- lazymergekit
- datatab/Yugo45-GPT
- FlexingD/yarn-mistral-7B-64k-instruct-alpaca-cleaned-origin
base_model:
- datatab/YugoGPT-Alpaca-v1
- FlexingD/yarn-mistral-7B-64k-instruct-alpaca-cleaned-origin
license: cc-by-4.0
datasets:
- datatab/alpaca-cleaned-serbian-full
language:
- sr
---

# Yugo45-GPT (7b)

**Yugo45-GPT (7b)** was fine-tuned on the Alpaca dataset, with **[gordicaleksa/YugoGPT](https://huggingface.co/gordicaleksa/YugoGPT/)** as the base model.

- **Fine-tuning performed by**: datatab
- **License**: CC-BY-4.0
- **Original model author**: [gordicaleksa/YugoGPT](https://huggingface.co/gordicaleksa/YugoGPT/)

Yugo45-GPT is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [datatab/YugoGPT-Alpaca-v1](https://huggingface.co/datatab/YugoGPT-Alpaca-v1)
* [FlexingD/yarn-mistral-7B-64k-instruct-alpaca-cleaned-origin](https://huggingface.co/FlexingD/yarn-mistral-7B-64k-instruct-alpaca-cleaned-origin)

## 📌 Note

Special thanks to [**Stopwolf**](https://huggingface.co/Stopwolf) for the idea, and to this **X** post: [**@TheStopwolf**](https://twitter.com/TheStopwolf/status/1761350502212599890)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: datatab/YugoGPT-Alpaca-v1
        layer_range: [0, 32]
      - model: FlexingD/yarn-mistral-7B-64k-instruct-alpaca-cleaned-origin
        layer_range: [0, 32]
merge_method: slerp
base_model: datatab/YugoGPT-Alpaca-v1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
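For intuition, the `t` values above set a per-layer-group interpolation factor between the two models' weights. A minimal Python sketch of spherical linear interpolation (SLERP) applied to a pair of flattened weight vectors follows; this is illustrative only, not mergekit's actual implementation:

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    # Angle between the two weight vectors, via their normalized dot product.
    n0 = math.sqrt(sum(x * x for x in v0))
    n1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (n0 * n1)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    # Interpolate along the great-circle arc between the two vectors.
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

print(slerp(0.0, [1.0, 0.0], [0.0, 1.0]))  # [1.0, 0.0] (first model's weights)
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # ≈ [0.707, 0.707] (arc midpoint)
```

With `t = 0` the merge keeps the first model's tensor, with `t = 1` the second's; the lists under `filter: self_attn` and `filter: mlp` schedule `t` across layer groups, and the bare `value: 0.5` is the default for everything else.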
## ๐Ÿ‹๐Ÿผ Benchmarks
```python
# TBD
```

## 💻 Usage

```python
# TBD
```
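Until official instructions land, a minimal sketch using the standard 🤗 `transformers` API. The Hub repo id `datatab/Yugo45-GPT` and the generation settings are assumptions based on this card, not confirmed by the author:

```python
# Hypothetical usage sketch: the repo id "datatab/Yugo45-GPT" and the
# generation settings below are assumptions, not taken from this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "datatab/Yugo45-GPT"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    # "What is the capital of Serbia?" in Serbian.
    print(generate("Koji je glavni grad Srbije?"))
```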