---
library_name: transformers
tags:
- trl
- reward-trainer
---
# Model Card for llm-course-hw2-reward-model

A reward model trained for PPO as part of the VK NLP course.
## Model Details

### Model Description

The model is based on the LLM HuggingFaceTB/SmolLM-135M-Instruct. Only the last linear layer (the classification head) was trained; the backbone was kept frozen. The model was trained with TRL for use in PPO training.
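Training only the last linear layer amounts to freezing every backbone parameter before training. A minimal sketch of that step (the helper function and the tiny stand-in model below are illustrative, not the actual training code; in `transformers`, Llama-style sequence-classification heads are named `score`):

```python
import torch.nn as nn

def freeze_all_but_head(model: nn.Module, head_name: str = "score") -> int:
    """Freeze every parameter except those under `head_name`.

    Returns the number of parameters left trainable.
    """
    trainable = 0
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(head_name)
        if param.requires_grad:
            trainable += param.numel()
    return trainable

# Demo with a stand-in model: a "backbone" plus a linear "score" head.
demo = nn.Module()
demo.backbone = nn.Linear(8, 8)
demo.score = nn.Linear(8, 1)

n = freeze_all_but_head(demo)
print(n)  # 9 trainable parameters: the head's 8 weights + 1 bias
```

The same loop applies unchanged to an `AutoModelForSequenceClassification` instance, since parameter names there also start with the head's module name.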
## Model Sources

- Pretrained model: HuggingFaceTB/SmolLM-135M-Instruct
- Training data: HumanLLMs/Human-Like-DPO-Dataset
## Getting started

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = torch.device("cuda")
tokenizer = AutoTokenizer.from_pretrained("dmitry315/llm-course-hw2-reward-model")
reward_model = AutoModelForSequenceClassification.from_pretrained(
    "dmitry315/llm-course-hw2-reward-model"
).to(device)

msgs = [
    {"role": "user", "content": "<prompt>"},
    {"role": "assistant", "content": "<LLM answer>"},
]
text = tokenizer.apply_chat_template(msgs, tokenize=False)
inputs = tokenizer(text, return_tensors="pt").to(device)

score = reward_model(**inputs).logits[0].cpu().detach()
print(score)
# > tensor([<score>])
```
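The scalar scores for two candidate answers to the same prompt can be turned into a preference probability with the Bradley-Terry model, the formulation that underlies TRL's reward-model training objective. A small sketch (the function name is illustrative):

```python
import torch

def preference_probability(score_chosen: float, score_rejected: float) -> float:
    # Bradley-Terry: p(chosen > rejected) = sigmoid(r_chosen - r_rejected)
    return torch.sigmoid(torch.tensor(score_chosen - score_rejected)).item()

# E.g. if the reward model scores the chosen answer 1.5 and the rejected one -0.5:
p = preference_probability(1.5, -0.5)
print(round(p, 3))  # 0.881
```

Equal scores give p = 0.5; the larger the score gap, the closer the probability gets to 1.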