Instructions to use clp/rlhf_reward_model with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- PEFT
How to use clp/rlhf_reward_model with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

# Load the base model, then apply the adapter weights from the Hub.
base_model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
model = PeftModel.from_pretrained(base_model, "clp/rlhf_reward_model")
```
- Notebooks
- Google Colab
- Kaggle
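Once the adapter is loaded, scoring text follows the usual sequence-classification pattern: run a forward pass and read the single logit as the scalar reward. The sketch below shows that pattern offline with a tiny, randomly initialised stand-in model (its config, the pre-tokenized input ids, and the single-logit head are illustrative assumptions, not properties of clp/rlhf_reward_model); in practice you would use the `model` and a tokenizer from the snippet above instead.

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

# Hypothetical stand-in: a tiny randomly initialised BERT classifier with a
# one-logit head, mirroring the shape of a reward model loaded via PeftModel.
config = BertConfig(vocab_size=30522, hidden_size=32, num_hidden_layers=1,
                    num_attention_heads=2, intermediate_size=64, num_labels=1)
model = BertForSequenceClassification(config)
model.eval()

# Pre-tokenized [CLS] ... [SEP] input ids stand in for a tokenizer call.
input_ids = torch.tensor([[101, 7592, 2088, 102]])
with torch.no_grad():
    # The single classification logit is read off as the scalar reward.
    reward = model(input_ids=input_ids).logits[0, 0].item()
print(reward)
```

With the real adapter, the only change is loading `base_model` + `PeftModel` and tokenizing your prompt/response text instead of hand-writing input ids.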