---
license: mit
datasets: pt-sk/toxic_classification
tags:
- PPO
- RLHF
pipeline_tag: text-generation
---
This model aligns GPT-2 with Proximal Policy Optimization (PPO) so that it generates non-toxic reviews. Training uses the `trl` library for reinforcement learning, the `transformers` library for model handling, and the `datasets` library for dataset management.

Implementation code is available here: [GitHub](https://github.com/sathishkumar67/GPT-2-Non-Toxic-RLHF)
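For reference, the sketch below outlines the shape of one PPO update with `trl`. It is a minimal illustration assuming the classic `PPOTrainer` API (trl < 0.12); the prompt, hyperparameters, and constant reward are placeholders, not the values used to train this model. In a real run the reward would come from a toxicity classifier scoring the decoded response.

```python
# Minimal PPO sketch (assumes the classic trl < 0.12 PPOTrainer API).
import torch
from transformers import AutoTokenizer
from trl import PPOConfig, PPOTrainer, AutoModelForCausalLMWithValueHead

# Placeholder hyperparameters, sized down for illustration
config = PPOConfig(model_name="gpt2", learning_rate=1.41e-5,
                   batch_size=1, mini_batch_size=1)

# Policy (with a value head for PPO) plus a frozen reference copy
# used for the KL penalty that keeps the policy close to base GPT-2
model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token

ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

# One illustrative PPO step on a single prompt
query = tokenizer.encode("This movie was", return_tensors="pt").squeeze()
response = ppo_trainer.generate([query], max_new_tokens=20,
                                return_prompt=False)[0]

# Stand-in reward: in practice, score tokenizer.decode(response) with a
# toxicity classifier and reward non-toxic continuations
reward = torch.tensor(1.0)

stats = ppo_trainer.step([query], [response], [reward])
```

Repeating this step over batches of prompts from the dataset nudges the policy toward completions the reward model scores as non-toxic, while the KL penalty against the reference model preserves fluency.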
```python
# Load the aligned model and tokenizer directly from the Hub
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("pt-sk/GPT2_NonToxic")
model = AutoModelForCausalLM.from_pretrained("pt-sk/GPT2_NonToxic")

# Example usage: continue a review prompt
input_text = "The movie was fantastic"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```