---
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
***

### L3-Inca-8B-v0.5

[L3-Inca-8B-v0.5](https://huggingface.co/Ppoyaa/L3-Inca-8B-v0.5) is a merge of the following models:

* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [Gryphe/Pantheon-RP-1.0-8b-Llama-3](https://huggingface.co/Gryphe/Pantheon-RP-1.0-8b-Llama-3)
* [Nitral-AI/Hathor-L3-8B-v.02](https://huggingface.co/Nitral-AI/Hathor-L3-8B-v.02)
* [grimjim/Llama-3-Luminurse-v0.2-OAS-8B](https://huggingface.co/grimjim/Llama-3-Luminurse-v0.2-OAS-8B)

using [NurtureAI/Meta-Llama-3-8B-Instruct-32k](https://huggingface.co/NurtureAI/Meta-Llama-3-8B-Instruct-32k) as the base.
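The exact merge recipe is not published in this card, so the config below is only a sketch of what a mergekit setup for these models might look like — the `merge_method` and `dtype` values are assumptions, not the author's actual settings:

```yaml
# Hypothetical mergekit config for L3-Inca-8B-v0.5.
# merge_method and dtype are illustrative assumptions;
# the model list and base_model come from the card above.
models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
  - model: Nitral-AI/Hathor-L3-8B-v.02
  - model: grimjim/Llama-3-Luminurse-v0.2-OAS-8B
base_model: NurtureAI/Meta-Llama-3-8B-Instruct-32k
merge_method: model_stock
dtype: bfloat16
```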
- Made with RP/ERP in mind.
- Supports a context length of 32k.
- Strong at instruction following.
- Fully uncensored.
***

Recommended Preset Settings:

```
Top K: 40
Top P: 0.95
Min P: 0.075
Rep Pen: 1.05
Rep Pen Range: 2048
Frequency Pen: 0.50
Presence Pen: 0.15
```
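Of these settings, Min P is the least widely known: it discards every token whose probability falls below `min_p` times the probability of the most likely token, so the cutoff scales with the model's confidence. A minimal, illustrative sketch of that filter in pure Python (not the sampler any particular frontend uses):

```python
import math

def min_p_filter(logits, min_p=0.075):
    """Return indices of tokens whose probability is at least
    min_p times the probability of the most likely token."""
    # Softmax over the logits (shift by the max for numerical stability)
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Threshold scales with the top token's probability
    threshold = min_p * max(probs)
    return [i for i, p in enumerate(probs) if p >= threshold]

# A peaked distribution keeps only the dominant token...
print(min_p_filter([10.0, 1.0, 0.5, 0.1]))  # -> [0]
# ...while a flat distribution keeps every token.
print(min_p_filter([1.0, 1.0, 1.0, 1.0]))   # -> [0, 1, 2, 3]
```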
***

### GGUF

Quantized versions by [mradermacher](https://huggingface.co/mradermacher): [L3-Inca-8B-v0.5-GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF)