---
library_name: peft
base_model: Anwaarma/edos_taskB_llama3b_merged2_FINAL
tags:
  - base_model:adapter:Anwaarma/edos_taskB_llama3b_merged2_FINAL
  - lora
  - transformers
metrics:
  - accuracy
model-index:
  - name: new2
    results: []
---

# new2

This model is a fine-tuned version of [Anwaarma/edos_taskB_llama3b_merged2_FINAL](https://huggingface.co/Anwaarma/edos_taskB_llama3b_merged2_FINAL) on an unspecified dataset.
It achieves the following results on the evaluation set:

- Loss: 1.5177
- Accuracy: 0.5546
- F1 Macro: 0.5101
- F1 Micro: 0.5546
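
Given the `peft` and `lora` tags, this checkpoint is presumably a LoRA adapter applied on top of the base model. A minimal loading sketch, assuming a sequence-classification head (consistent with the accuracy/F1 metrics above); the adapter repo id `Anwaarma/new2` is an assumption, not stated in the card:

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "Anwaarma/edos_taskB_llama3b_merged2_FINAL"
adapter_id = "Anwaarma/new2"  # assumption: actual adapter repo id may differ

tokenizer = AutoTokenizer.from_pretrained(base_id)
# The label configuration comes from the base checkpoint.
base_model = AutoModelForSequenceClassification.from_pretrained(base_id)
# Attach the LoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```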

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 40
- label_smoothing_factor: 0.1
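
A `transformers.TrainingArguments` sketch that mirrors the settings above; `output_dir` is a placeholder, and the actual training script is not included in the card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="new2",                 # placeholder, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=2,     # effective train batch size: 32 * 2 = 64
    optim="paged_adamw_8bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.06,
    num_train_epochs=40,
    label_smoothing_factor=0.1,
)
```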

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1 Macro | F1 Micro |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:--------:|
| 1.3186        | 1.8598 | 100  | 1.4746          | 0.5802   | 0.5325   | 0.5802   |
| 1.1161        | 3.7103 | 200  | 1.4088          | 0.5947   | 0.5390   | 0.5947   |
| 0.9134        | 5.5607 | 300  | 1.4204          | 0.5638   | 0.4999   | 0.5638   |
| 0.7884        | 7.4112 | 400  | 1.3747          | 0.5638   | 0.5132   | 0.5638   |
| 0.7428        | 9.2617 | 500  | 1.3314          | 0.5638   | 0.5076   | 0.5638   |
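
The accuracy and macro/micro F1 columns above are standard multi-class classification metrics. A minimal `compute_metrics` sketch of how they are typically produced with scikit-learn during `Trainer` evaluation (the card does not include the actual metric code):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Compute the metrics reported in the table above from raw logits."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1_macro": f1_score(labels, preds, average="macro"),
        "f1_micro": f1_score(labels, preds, average="micro"),
    }
```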

### Framework versions

- PEFT 0.17.1
- Transformers 4.56.2
- PyTorch 2.8.0+cu126
- Datasets 4.1.1
- Tokenizers 0.22.0