---
license: openrail
tags:
  - unsloth
  - openerotica
  - uncensored
language:
  - en
base_model:
  - meta-llama/Llama-3.2-3B-Instruct
name: Llama 3.2 - OpenRP 3B
description: >
  Llama 3.2 - OpenRP 3B is a fine-tuned version of the Meta Llama 3.2 3B
  Instruct model, optimized for role-playing (RP) tasks. This model was
  developed using the Unsloth AI training tool and fine-tuned over 60 epochs on
  two datasets: `openerotica/mixed-rp` (a diverse role-playing dataset from
  Openerotica) and `kingbri/PIPPA-shareGPT` (a supplementary dataset enhancing
  conversational RP capabilities). With 3 billion parameters, it delivers robust
  performance for uncensored, creative, and interactive applications.
datasets:
  - openerotica/mixed-rp
  - kingbri/PIPPA-shareGPT
training_tool: Unsloth AI
epochs: 60
prompt_framework: >
  This model adopts the Llama 3.2 prompting framework. For optimal performance
  and the best user experience, please adhere to this structure when crafting
  prompts.
parameters: 3B (3 billion)
new_version: N-Bot-Int/OpenElla3-Llama3.2B
library_name: peft
pipeline_tag: text-generation
---

**OpenRP3 is now discontinued.** Use OpenElla3 (`N-Bot-Int/OpenElla3-Llama3.2B`) for a more refined and capable AI experience.

# Llama 3.2 - 3B OpenRP-A

Llama 3.2 - OpenRP 3B is a fine-tuned version of the Meta Llama 3.2 3B Instruct model, optimized for role-playing (RP) tasks. It was developed with the Unsloth AI training tool and fine-tuned over 60 epochs on two datasets: `openerotica/mixed-rp` (a diverse role-playing dataset from Openerotica) and `kingbri/PIPPA-shareGPT` (a supplementary dataset that strengthens conversational RP capabilities). With 3 billion parameters, it delivers robust performance for uncensored, creative, and interactive applications.

## Prompt Framework

This model adopts the Llama 3.2 prompting framework. For optimal performance and the best user experience, please adhere to this structure when crafting prompts.
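As a minimal sketch, the Llama 3.2 prompt structure can be assembled by hand using the published Llama 3.x special tokens; the system and user messages below are illustrative only:

```python
# Minimal sketch of the Llama 3.2 chat prompt layout this model expects.
# Special tokens follow the published Llama 3.x chat format; the example
# messages are placeholders, not part of the model card.

def build_llama32_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3.2 prompt string ending at the
    assistant header, so the model generates the reply."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama32_prompt(
    "You are a creative role-play partner.",
    "Describe the tavern we just entered.",
)
print(prompt)
```

In practice, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` from `transformers` renders this same structure automatically from a list of role/content messages.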

## Parameters

3B (3 billion parameters)

## Training

- 60 training steps per dataset
- Fine-tuned with Unsloth AI
- Trained on Google Colab
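The training setup above (Unsloth LoRA fine-tuning on Colab) might look roughly like the following sketch. The base model and dataset names come from this card, but the LoRA rank, learning rate, batch size, sequence length, and dataset column name are all assumptions, not the author's actual configuration:

```python
# Hedged sketch of an Unsloth LoRA fine-tune on Google Colab; NOT the
# author's exact recipe. Hyperparameters and dataset field are assumed.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model in 4-bit so it fits a free Colab T4 GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.2-3B-Instruct",
    max_seq_length=4096,        # assumed context length
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules are assumptions).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# One of the two datasets listed on the card.
dataset = load_dataset("openerotica/mixed-rp", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column name
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,           # "60 training steps per dataset" per the card
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

Because `library_name` is `peft`, the published weights are presumably the resulting LoRA adapters rather than a fully merged model.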