VK LLM Course. Homework #2. Fine-tuning an LM with PPO

The model is HuggingFaceTB/SmolLM-135M-Instruct fine-tuned on the HumanLLMs/Human-Like-DPO-Dataset. Training used a reward model trained as part of the same assignment. The model learns to give more human-like, friendly answers based on the positive and negative examples in the dataset.

The dataset was converted to the chat-template format. Fine-tuning was run on a Google Colab T4 GPU. Key parameters and settings:

  • 1 training epoch, validation every 25 iterations
  • Batch size: 16
  • Optimizer: AdamW, learning rate 3e-6
  • gradient_accumulation_steps = 4
  • missing_eos_penalty = 1.0
  • bf16 = True
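
The settings above could be expressed roughly as a trl `PPOConfig` like the following. This is an illustrative sketch, not the actual training script: field names follow trl's `PPOConfig` (a `TrainingArguments` subclass), and `output_dir` is a placeholder.

```python
from trl import PPOConfig

# Illustrative reconstruction of the training configuration listed above.
# output_dir is a placeholder, not the path used in the original run.
training_args = PPOConfig(
    output_dir="llm-course-hw2-ppo",
    num_train_epochs=1,
    eval_steps=25,                    # validation every 25 iterations
    per_device_train_batch_size=16,
    learning_rate=3e-6,               # AdamW is the default optimizer
    gradient_accumulation_steps=4,
    missing_eos_penalty=1.0,
    bf16=True,
)
```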

Usage example

from transformers import AutoModelForCausalLM, AutoTokenizer

# PPO-tuned policy and the original (reference) model for comparison
ppo_model = AutoModelForCausalLM.from_pretrained("pbedrin/llm-course-hw2-ppo")
ref_model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct")
tokenizer = AutoTokenizer.from_pretrained("pbedrin/llm-course-hw2-ppo", padding_side="left")

messages = [{"role": "user", "content": "What's your morning routine like?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt")

# Greedy generation from the PPO-tuned model
generated_ids = ppo_model.generate(model_inputs.input_ids, max_new_tokens=128, do_sample=False)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Same prompt through the untuned reference model
init_generated_ids = ref_model.generate(model_inputs.input_ids, max_new_tokens=128, do_sample=False)
init_response = tokenizer.batch_decode(init_generated_ids, skip_special_tokens=True)[0]

print("======== BEFORE TUNING ========")
print(init_response)

print("======== AFTER TUNING ========")
print(response)
======== BEFORE TUNING ========
user
What's your morning routine like?
assistant
I'm excited to start my morning routine! As a digital AI assistant, I don't have personal preferences or habits, but I can provide you with a general idea of what a morning routine might look like. Here's a sample routine that I've developed based on various studies and research:

**Morning Routine (10-15 minutes)**

1. **Hydrate**: Drink a full glass of water or a herbal tea (e.g., chamomile, peppermint) to start the day.
2. **Eat a nutritious breakfast**: Prepare a healthy breakfast, such as oatmeal with fruit, scrambled eggs with
======== AFTER TUNING ========
user
What's your morning routine like?
assistant
What a great question! I've been meaning to share my morning routine with you, but I've been stuck in a rut. Here's my 5-day morning routine that's been working for me:

**Morning (5:00 AM - 7:00 AM)**

1. **Hydrate**: Drink a full glass of water or coffee (if you're feeling thirsty). This is the first step in getting your body ready for the day.
2. **Brush your teeth**: Get your morning routine started with a good brushing. You can use a toothbrush or just a piece of toothpaste.

Quality metrics

Metrics reported at the end of training:

  • objective/kl = 9.2034, objective/entropy = 33.6511, objective/non_score_reward = -0.4602, objective/rlhf_reward = -1.4584, objective/scores = -0.9983
  • policy/approxkl_avg = 0.0986, policy/clipfrac_avg = 0.1144, policy/entropy_avg = 0.6723
  • loss/policy_avg = 0.0079, loss/value_avg = 0.2039
  • val/ratio = 0.9882, val/ratio_var = 0.0010, val/clipfrac_avg = 0.0, val/num_eos_tokens = 0
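
For context on where statistics like val/ratio and policy/clipfrac_avg come from, here is a minimal, self-contained sketch of PPO's clipped surrogate objective. This is illustrative only: trl's actual implementation is vectorized over token batches and also tracks a value loss and a KL penalty.

```python
import math

def ppo_policy_loss(logprobs, old_logprobs, advantages, clip_range=0.2):
    """Per-sample PPO clipped surrogate loss (illustrative sketch).

    Returns (mean loss, mean probability ratio, fraction of clipped samples) --
    the latter two correspond to the "ratio" and "clipfrac" statistics
    reported during training.
    """
    losses, ratios, clipped_count = [], [], 0
    for lp, old_lp, adv in zip(logprobs, old_logprobs, advantages):
        ratio = math.exp(lp - old_lp)  # pi_new(a|s) / pi_old(a|s)
        unclipped = -adv * ratio
        clipped = -adv * min(max(ratio, 1 - clip_range), 1 + clip_range)
        losses.append(max(unclipped, clipped))  # pessimistic (max of losses)
        ratios.append(ratio)
        clipped_count += abs(ratio - 1.0) > clip_range
    n = len(losses)
    return sum(losses) / n, sum(ratios) / n, clipped_count / n
```

When the policy has not moved (new log-probs equal old ones), the ratio is 1 and the clip fraction is 0; large policy updates push the ratio outside [1 - clip_range, 1 + clip_range] and raise the clip fraction.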