---
license: apache-2.0
tags:
- unsloth
- Uncensored
- text-generation-inference
- transformers
- llama
- trl
- roleplay
- conversational
datasets:
- openerotica/mixed-rp
- kingbri/PIPPA-shareGPT
- flammenai/character-roleplay-DPO
language:
- en
base_model:
- N-Bot-Int/OpenRP3B-Llama3.2
new_version: N-Bot-Int/OpenElla3-Llama3.2B-V2
pipeline_tag: text-generation
library_name: peft
metrics:
- character
---
<a href="https://ibb.co/GvDjFcVp"><img src="https://raw.githubusercontent.com/ItsMeDevRoland/NexusBotWorkInteractives/refs/heads/main/image%20(1).webp" alt="image" border="0"></a>
# Llama3.2 - OpenElla3B
- OpenElla Model **B** is a Llama3.2 **3B**-parameter model
fine-tuned for roleplaying, despite its limited parameter count.
This was achieved through a series of fine-tuning runs over three datasets with different
weights, aiming to counter Llama3.2's generalist tendencies and to specialize
in roleplaying and acting.
- OpenElla3A excels at producing **RAW** and **UNCENSORED** output but is weak at
following prompts. To address this, the model was re-finetuned, which **solves OpenElla3A's
disobedience issue**. This allows the model to engage in uncensored yet appropriate responses, rivaling
its older models.
- OpenElla3B was fine-tuned on additional datasets, so please report any issues by email at
[nexus.networkinteractives@gmail.com](mailto:nexus.networkinteractives@gmail.com),
whether overfitting or suggested improvements for the future Model **C**.
Once again, feel free to modify the LoRA to your liking; however, please consider crediting this page,
and if you expand its **dataset**, please handle it with care and ethical consideration.
- OpenElla3B is
- **Developed by:** N-Bot-Int
- **License:** apache-2.0
  - **Parent Model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
  - **Sequentially Trained from:** N-Bot-Int/OpenElla3-Llama3.2A
  - **Datasets Combined Using:** Mosher-R1 (Proprietary Software)
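Since this model is distributed as a LoRA adapter (`library_name: peft`), it can be attached to the parent model listed above. A minimal sketch, assuming the adapter repo id `N-Bot-Int/OpenElla3-Llama3.2B` (hypothetical; substitute the id shown on this page):

```python
# Hypothetical sketch: attach the OpenElla3B LoRA adapter to its parent model.
BASE_ID = "unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit"
ADAPTER_ID = "N-Bot-Int/OpenElla3-Llama3.2B"  # assumed adapter repo id

def load_openella():
    # Imports are deferred so the constants above can be reused standalone.
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
    model = PeftModel.from_pretrained(base, ADAPTER_ID)  # applies the LoRA weights
    tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
    return model, tokenizer
```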
- OpenElla3B Official Metric Score

- Metrics made by **ItsMeDevRoland**, comparing:
  - **Deepseek R1 3B GGUF**
  - **Dolphin 3B GGUF**
  - **Hermes 3B Llama GGUF**
  - **OpenElla3-Llama3.2B GGUF**
- All models were ranked with the same prompt, same temperature, and same hardware (Google Colab),
to properly showcase their differences and strengths.
- **THIS MODEL EXCELS AT LONGER PROMPTS AND AT STAYING IN CHARACTER, BUT LAGS BEHIND DEEPSEEK-R1**
- # Notice
- **For a good experience, please use:**
  - temperature = 1.5, min_p = 0.1 and max_new_tokens = 128
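The recommended settings above can be passed directly to `generate`. A minimal sketch with `transformers`; the default repo id is an assumption, substitute the id of this page:

```python
# Recommended sampling settings from the notice above.
GEN_KWARGS = {
    "do_sample": True,   # sampling must be on for temperature/min_p to apply
    "temperature": 1.5,
    "min_p": 0.1,
    "max_new_tokens": 128,
}

def roleplay_reply(prompt: str, model_id: str = "N-Bot-Int/OpenElla3-Llama3.2B") -> str:
    """Generate one in-character reply (downloads the model on first use)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(input_ids, **GEN_KWARGS)
    # Decode only the newly generated tokens.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```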
- # Detail card:
- Parameter
- 3 Billion Parameters
  - (Please check with your GPU vendor whether you can run 3B models)
- Training
- 500 steps
- Mixed-RP Startup Dataset
- 200 steps
- PIPPA-ShareGPT for Increased Roleplaying capabilities
- 150 steps(Re-fining)
- PIPPA-ShareGPT again, to further increase PIPPA's weight and override noise
- 500 steps (lower LR)
- character-roleplay-DPO, to further encourage the model to respond appropriately within the RP scenario
- Finetuning tool:
- Unsloth AI
- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
- Fine-tuned Using:
- Google Colab
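The staged schedule from the detail card could be reproduced as sequential fine-tuning runs. A rough sketch with TRL's `SFTTrainer`; the actual recipe used Unsloth's tooling, and the learning rates, dataset splits, and trainer choice for the DPO-style stage are not published, so everything beyond the dataset ids and step counts is an assumption:

```python
# Hypothetical reconstruction of the staged schedule above. Dataset ids come
# from the card's metadata and step counts from the detail card; the trainer
# configuration is assumed.
STAGES = [
    ("openerotica/mixed-rp", 500),              # startup dataset
    ("kingbri/PIPPA-shareGPT", 200),            # roleplay capability
    ("kingbri/PIPPA-shareGPT", 150),            # re-fining pass
    ("flammenai/character-roleplay-DPO", 500),  # lower-LR stage
]

def run_stage(model, dataset_id, max_steps):
    # Deferred imports keep the STAGES table usable on its own.
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    trainer = SFTTrainer(
        model=model,
        args=SFTConfig(max_steps=max_steps, output_dir="checkpoints"),
        train_dataset=load_dataset(dataset_id, split="train"),
    )
    trainer.train()
    return trainer.model
```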