---
license: apache-2.0
tags:
- unsloth
- Uncensored
- text-generation-inference
- transformers
- llama
- trl
- roleplay
- conversational
datasets:
- openerotica/mixed-rp
- kingbri/PIPPA-shareGPT
language:
- en
base_model:
- N-Bot-Int/OpenRP3B-Llama3.2
new_version: N-Bot-Int/OpenElla3-Llama3.2B
pipeline_tag: text-generation
library_name: peft
---
<a href="https://ibb.co/FbZyWMB7"><img src="https://raw.githubusercontent.com/ItsMeDevRoland/NexusBotWorkInteractives/refs/heads/main/image.webp" alt="image-1" border="0"></a>
# Llama3.2 - OpenElla3A
- OpenElla is a Llama3.2 **3B**-parameter model fine-tuned for roleplaying, even with its limited parameter count.
This was achieved through a series of fine-tuning runs over two datasets with different
weights, aiming to counter Llama3.2's generalist approach and specialize the model in
roleplaying and acting.
- OpenElla3A excels at producing **RAW** and **UNCENSORED** output,
but it **LACKS PROPER OBEDIENCE TRAINING**. Because of this, OpenElla3 model **A**
is intended for training purposes only. If you plan to train or distill a Llama model to
generate uncensored content, please do so with care and ethical consideration.
- OpenElla3B:
  - **Developed by:** N-Bot-Int
  - **License:** apache-2.0
  - **Parent model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit (see the loading sketch below)
  - **Sequentially trained from:** N-Bot-Int/OpenElla3-Llama3.2A
  - **Datasets combined using:** Mosher-R1 (proprietary software)
- OpenElla3B is **NOT YET RANKED WITH ANY METRICS**
- Feel free to support the project by emailing me: [nexus.networkinteractives@gmail.com](mailto:nexus.networkinteractives@gmail.com)
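
Since this card lists `peft` as the library, the repo presumably ships a LoRA adapter on top of the 4-bit base model above. Here is a minimal loading sketch; the repo id is assumed from this card's title, so adjust it to the checkpoint you actually use:

```python
# Minimal loading sketch, assuming this repo hosts a PEFT (LoRA) adapter
# on top of the 4-bit Llama3.2 base model listed above.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

repo_id = "N-Bot-Int/OpenElla3-Llama3.2A"  # assumed repo id from this card

model = AutoPeftModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # illustrative; pick a dtype your GPU supports
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
```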
- # Notice
- **For a good experience, please use:**
  - temperature = 1.5, min_p = 0.1, and max_new_tokens = 128 (illustrated below)
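
As an illustration, here is a minimal generation sketch with those exact settings, using the high-level `transformers` pipeline. The roleplay prompt is a placeholder, and `min_p` requires a reasonably recent `transformers` release:

```python
# Generation sketch with the recommended sampling settings.
# The roleplay prompt is a placeholder; substitute your own scenario.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="N-Bot-Int/OpenElla3-Llama3.2A",  # assumed repo id from this card
    device_map="auto",
)
messages = [{"role": "user", "content": "You are a pirate captain. Greet your crew."}]
result = generator(
    messages,
    do_sample=True,       # sampling must be enabled for temperature/min_p to apply
    temperature=1.5,
    min_p=0.1,
    max_new_tokens=128,
)
print(result[0]["generated_text"][-1]["content"])
```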
- # Detail card:
- Parameters
  - 3 billion parameters
  - (Please check your GPU vendor's specs to confirm you can run 3B models)
- Training (a staged training sketch appears at the end of this card)
  - 500 steps
    - Mixed-RP startup dataset
  - 200 steps
    - PIPPA-ShareGPT, for increased roleplaying capability
  - 150 steps (re-fining)
    - PIPPA-ShareGPT again, to further increase PIPPA's weight and override noise
- Finetuning tool:
  - Unsloth AI
    - This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
- Fine-tuned using:
  - Google Colab
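
For readers who want to reproduce the staged recipe from the detail card, here is a minimal Unsloth + TRL sketch. The step counts and dataset ids come from this card; everything else (LoRA rank, learning rate, batch size, the `text` column name) is an illustrative assumption, and the exact `SFTTrainer` arguments vary across `trl` versions:

```python
# Sketch of the staged fine-tuning recipe from the detail card.
# Step counts and datasets are from the card; other hyperparameters are assumptions.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,               # assumed LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

def train_stage(dataset_id: str, max_steps: int) -> None:
    """Run one sequential stage; later stages shift the weights further."""
    dataset = load_dataset(dataset_id, split="train")
    SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        dataset_text_field="text",  # assumes a pre-formatted text column
        args=TrainingArguments(
            max_steps=max_steps,
            per_device_train_batch_size=2,
            learning_rate=2e-4,
            output_dir="outputs",
        ),
    ).train()

train_stage("openerotica/mixed-rp", 500)    # stage 1: startup dataset
train_stage("kingbri/PIPPA-shareGPT", 200)  # stage 2: roleplay focus
train_stage("kingbri/PIPPA-shareGPT", 150)  # stage 3: re-fining pass
```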