# Model Card for dmitry315/llm-course-hw2-reward-model

A reward model trained for PPO as part of the VK NLP course.

## Model Details

### Model Description

The base model is HuggingFaceTB/SmolLM-135M-Instruct with a sequence-classification head.

Only the final linear layer was trained; the backbone weights were kept frozen.

The model was trained with TRL for use in PPO training.
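"Only the final linear layer was trained" means the backbone is frozen and gradients flow only through the scalar score head. A minimal sketch of that pattern in plain PyTorch (toy stand-in module, not the actual training script):

```python
import torch.nn as nn

# Toy stand-in for a sequence-classification model: a backbone plus a
# scalar "score" head, mirroring the setup where only the final linear
# layer of the SmolLM-based reward model is trained.
backbone = nn.Sequential(nn.Embedding(100, 16), nn.Flatten(), nn.Linear(16 * 8, 32))
score_head = nn.Linear(32, 1)  # single logit = reward score
model = nn.Sequential(backbone, score_head)

# Freeze everything, then unfreeze only the head.
for p in model.parameters():
    p.requires_grad = False
for p in score_head.parameters():
    p.requires_grad = True

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the head's weight and bias remain trainable
```

An optimizer built over `filter(lambda p: p.requires_grad, model.parameters())` then updates just the head.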

### Model Sources

## Getting Started

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("dmitry315/llm-course-hw2-reward-model")
reward_model = AutoModelForSequenceClassification.from_pretrained(
    "dmitry315/llm-course-hw2-reward-model"
).to(device)

msgs = [
    {"role": "user", "content": "<prompt>"},
    {"role": "assistant", "content": "<LLM answer>"},
]

# Render the chat template to text, then tokenize.
text = tokenizer.apply_chat_template(msgs, tokenize=False)
inputs = tokenizer(text, return_tensors="pt").to(device)

score = reward_model(**inputs).logits[0].cpu().detach()
print(score)
# > tensor([<score>])
```
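Because the head outputs a single scalar logit, scores can be compared directly to rank candidate answers to the same prompt. A small sketch (the prompt and candidate answers below are made-up illustrations, not from the training data):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "dmitry315/llm-course-hw2-reward-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
reward_model = AutoModelForSequenceClassification.from_pretrained(model_id)
reward_model.eval()

prompt = "What is the capital of France?"  # hypothetical example prompt
candidates = ["The capital of France is Paris.", "I don't know."]

scores = []
for answer in candidates:
    msgs = [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": answer},
    ]
    text = tokenizer.apply_chat_template(msgs, tokenize=False)
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        scores.append(reward_model(**inputs).logits[0].item())

# Higher reward = preferred answer.
best = candidates[scores.index(max(scores))]
print(best)
```

In PPO training this scalar is the reward signal assigned to each sampled completion.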