---
base_model:
- Vortex5/MS3.2-24B-Fiery-Lynx
- ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.0
- Vortex5/MS3.2-24B-Chaos-Skies
library_name: transformers
tags:
- mergekit
- merge
- roleplay
---
![ComfyUI_00171_](https://cdn-uploads.huggingface.co/production/uploads/6669a3a617b838fda45637b8/sFwQYcb-m-OyYN9crZnZ1.png)

# 🔥 **MS3.2-24B-Solar-Skies**

> Bright minds under boundless skies, where every conversation becomes a sunrise of imagination


## 🧬 Overview

**MS3.2-24B-Solar-Skies** is a merge of pre-trained language models created using [MergeKit](https://github.com/arcee-ai/mergekit).  
It draws upon the **intellectual density** of *The Omega Directive*, the **expressive prose** of *Fiery Lynx*, and the **measured balance** of *Chaos Skies*.

---

## βš™οΈ **Merge Method β€” [Multi-SLERP](https://goddard.blog/posts/multislerp-wow-what-a-cool-idea)**

🧩 **Models:**
- 🧠 [ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.0](https://huggingface.co/ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.0)  
- πŸ”₯ [Vortex5/MS3.2-24B-Fiery-Lynx](https://huggingface.co/Vortex5/MS3.2-24B-Fiery-Lynx)  
- 🌌 [Vortex5/MS3.2-24B-Chaos-Skies](https://huggingface.co/Vortex5/MS3.2-24B-Chaos-Skies)  

<details>
<summary><b>Configuration</b></summary>

```yaml
models:
  - model: ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.0
    parameters:
      weight:
        - filter: self_attn
          value: [0.20, 0.35, 0.55, 0.75, 1.00, 0.95, 0.80, 0.50]
        - filter: norm
          value: 0.45
        - value: 0.33
  - model: Vortex5/MS3.2-24B-Fiery-Lynx
    parameters:
      weight:
      - filter: lm_head
        value: 0.34
      - filter: mlp
        value: [0.20, 0.30, 0.45, 0.60, 0.65, 0.60, 0.45, 0.30]
      - value: 0.25
  - model: Vortex5/MS3.2-24B-Chaos-Skies
    parameters:
      weight:
      - filter: self_attn
        value: [0.25, 0.35, 0.45, 0.55, 0.60, 0.65, 0.65, 0.60]
      - filter: mlp
        value: 0.2
      - value: 0.33
merge_method: multislerp
dtype: bfloat16
parameters:
  normalize: true
tokenizer:
  source: Vortex5/MS3.2-24B-Chaos-Skies
```
</details>
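
For intuition, SLERP-style merging blends weight tensors along the great-circle arc between them rather than averaging them linearly, which preserves the magnitude and direction of the weights better; multi-SLERP generalizes this to more than two models with per-layer weights like those in the config above. A minimal two-tensor sketch of the underlying idea (an illustration only, not mergekit's actual implementation):

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Spherically interpolate between two flattened weight tensors.

    t=0 returns a, t=1 returns b; intermediate values follow the
    great-circle arc between the two directions.
    """
    a_unit = a / np.linalg.norm(a)
    b_unit = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly parallel tensors: fall back to linear interpolation.
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# Midpoint of two orthogonal unit vectors stays on the unit sphere,
# whereas a plain average would shrink the norm to ~0.707.
x = slerp(np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.5)
```

The `filter`/`value` lists in the config apply different interpolation weights per layer block (e.g. `self_attn` vs `mlp`), so each model contributes more in some depths of the network than others.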

## 🎭 Intended Use
| Category | Description |
|----------|-------------|
| 🧘 **Reflective Dialogue** | Ideal for introspective or philosophical discussions, exploring abstract and emotional topics. |
| 🖋️ **Creative Writing** | Excels at expressive prose, narrative storytelling, and immersive worldbuilding. |
| 🧠 **Analytical Reasoning** | Balances logic and creativity for insightful, stylistically nuanced explanations. |
| 💞 **Character Roleplay** | Adapts fluidly to emotional, character-driven interactions and narrative depth. |

---

# 🌒 Acknowledgements

- ⚙️ mradermacher: static / imatrix quantization

- 🜛 DeathGodlike: EXL3 quants

- 💫 All original model authors and contributors whose work formed the foundation for this merge.