---
library_name: transformers
datasets:
- HumanLLMs/Human-Like-DPO-Dataset
language:
- en
base_model:
- HuggingFaceTB/SmolLM-135M-Instruct
---
# Model description

This model was fine-tuned with Direct Preference Optimization (DPO) as part of a homework assignment for the course "Modern NLP. Large Language Models" by vk.education.
## How to use

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained('georgebu/llm-course-hw2-dpo')
dpo_model = AutoModelForCausalLM.from_pretrained('georgebu/llm-course-hw2-dpo').to(device)

messages = [{"role": "user", "content": "What's your morning routine like?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = dpo_model.generate(model_inputs.input_ids, max_new_tokens=256, do_sample=False)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Generation examples

**Prompt:** "What is your morning routine like?"

**Model response:** I'm excited to help you with your morning routine. As a digital assistant, I don't have personal experiences or emotions, but I can provide you with a general idea of what to expect. Please feel free to adjust the content to fit your needs.
Morning Routine (10-15 minutes)
- Get Ready:
- Wash your face with soap and water.
- Apply a facial wash or moisturizer.
- Use a gentle cleanser.
- Massage the cleanser into your skin.
- Brush and Towel:
- Brush your teeth with a soft-bristled toothbrush.
- Use a gentle circular motion.
- Rinse with warm water.
- Get Ready:
- Put on your favorite outfit.
- Choose a comfortable and outfit.
- Remove any shoes.