Reward model trained for the PPO assignment of the VK NLP course.

The base model is HuggingFaceTB/SmolLM-135M-Instruct; only the last linear (score) layer was trained. Training was done with TRL, and the model is intended as the reward model for PPO training.
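Since only the last linear layer was trained, every other parameter stays frozen. A minimal sketch of that freezing pattern on a hypothetical toy module (`TinyRewardModel` is an illustration, not the actual training code):

```python
import torch
from torch import nn

# Hypothetical stand-in for the SmolLM backbone + score head;
# freezing works the same way on the real model.
class TinyRewardModel(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(8, hidden), nn.ReLU())
        self.score = nn.Linear(hidden, 1)  # reward head: one scalar per sequence

    def forward(self, x):
        return self.score(self.backbone(x))

model = TinyRewardModel()

# Freeze everything, then unfreeze only the final linear (score) layer.
for p in model.parameters():
    p.requires_grad = False
for p in model.score.parameters():
    p.requires_grad = True

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # ['score.weight', 'score.bias']
```

The optimizer then only needs the unfrozen parameters, e.g. `torch.optim.AdamW(model.score.parameters())`.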
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda")
tokenizer = AutoTokenizer.from_pretrained("dmitry315/llm-course-hw2-reward-model")
reward_model = AutoModelForSequenceClassification.from_pretrained(
    "dmitry315/llm-course-hw2-reward-model"
).to(device)

msgs = [
    {"role": "user", "content": "<prompt>"},
    {"role": "assistant", "content": "<LLM answer>"},
]

# Render the dialogue with the chat template, then tokenize it.
text = tokenizer.apply_chat_template(msgs, tokenize=False)
inputs = tokenizer(text, return_tensors="pt").to(device)

# The logit is the scalar reward for this prompt-answer pair.
score = reward_model(**inputs).logits[0].cpu().detach()
print(score)
# > tensor([<score>])
```
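Scores like the one above are only meaningful relative to each other: a reward model is trained so that the preferred ("chosen") answer scores higher than the "rejected" one. A sketch of the pairwise Bradley-Terry loss used for this kind of training, with made-up scores standing in for two `reward_model(...)` outputs:

```python
import torch
import torch.nn.functional as F

# Hypothetical scores; in practice these come from reward_model(**inputs).logits
# for a chosen and a rejected completion of the same prompt.
score_chosen = torch.tensor([[1.7]])
score_rejected = torch.tensor([[0.3]])

# Pairwise loss: push the chosen score above the rejected score.
loss = -F.logsigmoid(score_chosen - score_rejected).mean()
print(round(loss.item(), 4))  # small when chosen >> rejected, ~0.22 here
```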