---
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
![1718719796324.png](https://cdn-uploads.huggingface.co/production/uploads/65f158693196560d34495d54/6SuBtoTMP5Svgl0fmT9vt.png)
***
### L3-Inca-8B-v0.5
[L3-Inca-8B-v0.5](https://huggingface.co/Ppoyaa/L3-Inca-8B-v0.5) is a merge of the following models:
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [Gryphe/Pantheon-RP-1.0-8b-Llama-3](https://huggingface.co/Gryphe/Pantheon-RP-1.0-8b-Llama-3)
* [Nitral-AI/Hathor-L3-8B-v.02](https://huggingface.co/Nitral-AI/Hathor-L3-8B-v.02)
* [grimjim/Llama-3-Luminurse-v0.2-OAS-8B](https://huggingface.co/grimjim/Llama-3-Luminurse-v0.2-OAS-8B)

using [NurtureAI/Meta-Llama-3-8B-Instruct-32k](https://huggingface.co/NurtureAI/Meta-Llama-3-8B-Instruct-32k) as the base.
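The exact merge recipe is not published here, but a mergekit configuration for this kind of base-plus-models merge might look like the following sketch. The `merge_method` (`model_stock`) and `dtype` are assumptions, not the actual recipe:

```yaml
# Hypothetical mergekit config for L3-Inca-8B-v0.5 (method/dtype are assumptions)
models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
  - model: Nitral-AI/Hathor-L3-8B-v.02
  - model: grimjim/Llama-3-Luminurse-v0.2-OAS-8B
base_model: NurtureAI/Meta-Llama-3-8B-Instruct-32k
merge_method: model_stock
dtype: bfloat16
```

Running `mergekit-yaml config.yml ./output` with a config like this produces the merged checkpoint.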

- Made with RP/ERP in mind.
- Supports a context length of 32k.
- Strong at instruction following.
- Fully uncensored.
***
Recommended Preset Settings:
```
Top K: 40
Top P: 0.95
Min P: 0.075
Rep Pen: 1.05
Rep Pen Range: 2048
Frequency Pen: 0.50
Presence Pen: 0.15
```
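These samplers map onto the keyword arguments of most llama.cpp-based backends; a minimal sketch using llama-cpp-python (the backend choice and the GGUF file name are assumptions):

```python
# Recommended sampler preset expressed as llama-cpp-python completion kwargs.
# Note: Rep Pen Range (2048) is configured at the backend level
# (e.g. llama.cpp's --repeat-last-n flag), not per-request here.
PRESET = {
    "top_k": 40,
    "top_p": 0.95,
    "min_p": 0.075,
    "repeat_penalty": 1.05,
    "frequency_penalty": 0.50,
    "presence_penalty": 0.15,
}

# Usage (requires a downloaded GGUF; the file name below is a placeholder):
# from llama_cpp import Llama
# llm = Llama(model_path="L3-Inca-8B-v0.5.Q4_K_M.gguf", n_ctx=32768)
# out = llm.create_completion("Hello", max_tokens=64, **PRESET)
```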
***
### GGUF

[mradermacher](https://huggingface.co/mradermacher): [L3-Inca-8B-v0.5-GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF)