Update README.md
README.md CHANGED

```diff
@@ -1,5 +1,5 @@
 ---
-license:
+license: apache-2.0
 tags:
 - unsloth
 - Uncensored
@@ -8,7 +8,8 @@ tags:
 - unsloth
 - llama
 - trl
--
+- roleplay
+- conversational
 datasets:
 - openerotica/mixed-rp
 - kingbri/PIPPA-shareGPT
@@ -34,14 +35,14 @@ library_name: peft
 Are Only Used for Training purposes, if you seek to train or Distill A Llama Model to
 Force it to generate Uncensored Content then please do so with care and ethical considerations
 
--
+- OpenElla3B is
 - **Developed by:** N-Bot-Int
 - **License:** apache-2.0
 - **Parent Model from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
-- **Sequential Trained from Model:** N-Bot-Int/
+- **Sequential Trained from Model:** N-Bot-Int/OpenElla3-Llama3.2A
 - **Dataset Combined Using:** Mosher-R1(Propietary Software)
 
--
+- OpenElla3B Is **NOT YET RANKED WITH ANY METRICS**
 - Feel free to support by Emailing me: <link src="mailto:nexus.networkinteractives@gmail.com">nexus.networkinteractives@gmail.com</link>
 
 # Notice
@@ -58,9 +59,9 @@ library_name: peft
 - 500 steps
 - Mixed-RP Startup Dataset
 - 200 steps
-- PIPPA-ShareGPT for
+- PIPPA-ShareGPT for Increased Roleplaying capabilities
 - 150 steps(Re-fining)
-- PIPPA-ShareGPT to further increase weight of
+- PIPPA-ShareGPT to further increase weight of PIPPA and to override the noises
 
 - Finetuning tool:
 - Unsloth AI
```