license: llama3.1
---

# **Luminatium-L3-8b: Stheno + Lunaris Merge**

![image/png]

---

### **Models Merged**
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [Sao10K/L3-8B-Lunaris-v1](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1)

---

## **Recommended Settings**
```python
temperature: 1.3
min_p: 0.08
rep_pen: 1.1
top_k: 50
max_tokens/context: 8192
```
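The block above is a list of sampler knobs rather than runnable code. As a sketch of what `temperature`, `top_k`, and `min_p` actually do to a token distribution (the function name and toy logits below are illustrative, not part of this model's code):

```python
import math

def filter_logits(logits, temperature=1.3, top_k=50, min_p=0.08):
    """Apply temperature, top-k, then min-p filtering to raw logits.
    Returns the renormalized probabilities of surviving token ids."""
    # Temperature scaling: values > 1 flatten the distribution, < 1 sharpen it.
    scaled = [l / temperature for l in logits]
    # Top-k: keep only the k highest-scoring tokens.
    kept = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)[:top_k]
    # Numerically stable softmax over the kept tokens.
    m = max(scaled[i] for i in kept)
    exps = {i: math.exp(scaled[i] - m) for i in kept}
    z = sum(exps.values())
    probs = {i: e / z for i, e in exps.items()}
    # min-p: drop tokens whose probability is below min_p * p(best token).
    cutoff = min_p * max(probs.values())
    survivors = {i: p for i, p in probs.items() if p >= cutoff}
    z2 = sum(survivors.values())
    return {i: p / z2 for i, p in survivors.items()}
```

With `min_p: 0.08`, any token less than 8% as likely as the top candidate is discarded before sampling, which trims the long tail that a high temperature like 1.3 would otherwise inflate.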

**Hardware Requirements:**
- Minimum: 16GB VRAM (Q4_K_M quantization recommended)
- Optimal: 32GB VRAM for full bfloat16 precision
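The bullets above can be sanity-checked with weights-only arithmetic. This is a rough sketch: the ~4.85 bits/weight figure for Q4_K_M is an approximation from llama.cpp quantization tables, and real usage adds KV cache and framework overhead on top.

```python
# Back-of-the-envelope weights-only VRAM estimate for a Llama-3-8B merge.
# Assumptions: ~8.03B parameters, KV cache and activations not counted.
PARAMS = 8_030_000_000

bf16_gb = PARAMS * 2 / 1024**3        # bfloat16: 2 bytes per parameter -> ~15 GB
q4_gb = PARAMS * 4.85 / 8 / 1024**3   # Q4_K_M: ~4.85 bits per weight -> ~4.5 GB
```

So the full-precision weights alone nearly fill a 16GB card, while a Q4_K_M GGUF leaves ample headroom for context on the same hardware.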

---

### **Configuration**
```yaml
- value: 0.5
```
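The full merge recipe is truncated above; only its final interpolation value (`0.5`) is visible. Purely for illustration, a generic mergekit SLERP recipe ending in that value could look like the following — the method, layer ranges, base model choice, and dtype here are assumptions, not the actual recipe:

```yaml
# Hypothetical mergekit recipe sketch; the real configuration is truncated above.
slices:
  - sources:
      - model: Sao10K/L3-8B-Stheno-v3.2
        layer_range: [0, 32]
      - model: Sao10K/L3-8B-Lunaris-v1
        layer_range: [0, 32]
merge_method: slerp
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  t:
    - value: 0.5   # 0.5 = equal weighting between the two parents
dtype: bfloat16
```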

**This model was created using [mergekit](https://github.com/cg123/mergekit).**

**License: Llama 3.1 (see LICENSE.md for details)**