VK LLM Course. Homework #3. DoRA Fine-Tuning

The model is OuteAI/Lite-Oute-1-300M-Instruct fine-tuned on the cardiffnlp/tweet_eval dataset. Using PEFT, the model is trained to classify tweet sentiment, adapting it to the specific style of the tweets in the dataset.

Parameters:

  • BATCH_SIZE = 128
  • LEARNING_RATE = 1e-4
  • NUM_EPOCHS = 1
  • rank = 8, alpha = 16, target_submodules = ["k_proj", "v_proj"]

Fine-tuning was performed on an NVIDIA A100.
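The hyperparameters above map directly onto a PEFT `LoraConfig`: in the `peft` library, DoRA is enabled by setting `use_dora=True` on an ordinary LoRA config. A minimal sketch of the adapter setup, assuming the standard `peft`/`transformers` APIs (the exact training script may differ):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# DoRA reuses the LoRA config; use_dora=True adds the magnitude decomposition.
config = LoraConfig(
    r=8,                                  # rank
    lora_alpha=16,                        # alpha
    target_modules=["k_proj", "v_proj"],  # target_submodules
    use_dora=True,
    task_type="CAUSAL_LM",
)

model = AutoModelForCausalLM.from_pretrained("OuteAI/Lite-Oute-1-300M-Instruct")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only adapter weights are trainable
```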

Quality metrics

Before fine-tuning: Macro F1 = 0.17

After DoRA fine-tuning: Macro F1 = 0.51
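Macro F1 is the unweighted mean of the per-class F1 scores over the three sentiment labels, so every class counts equally regardless of how many tweets it has. A pure-Python sketch of the metric (equivalent to sklearn's `f1_score(average="macro")`; the label names mirror tweet_eval's sentiment task):

```python
def macro_f1(y_true, y_pred, labels=("negative", "neutral", "positive")):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for c in labels:
        # Per-class counts: true positives, false positives, false negatives.
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)
```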

Example generations after DoRA

==========
Chase Headley's RBI double in the 8th inning off David Price snapped a Yankees streak of 33 consecutive scoreless innings against Blue Jays
neutral
assistant
neutral 
 assistant
neutral 

==========
@user Alciato: Bee will invest 150 million in January, another 200 in the Summer and plans to bring Messi by 2017"
positive
assistant
neutral 
 assistant
neutral 

==========