---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: tapt_helpfulness_base_pretraining_model_final
  results: []
---
|
|
|
|
|
|
|
|
|
|
# tapt_helpfulness_base_pretraining_model_final |
|
|
|
|
|
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on a dataset that is not recorded on this card; the name suggests task-adaptive pretraining (TAPT) on helpfulness data.

It achieves the following results on the evaluation set:

- Loss: 1.4543
|
|
|
|
|
## Model description |
|
|
|
|
|
No description was provided with this checkpoint. Judging by the base model, the name, and the reported loss, it appears to be a roberta-base model whose masked-language-model pretraining was continued on a helpfulness-related corpus (task-adaptive pretraining, TAPT).
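
If that reading is correct, the checkpoint should load as a fill-mask model. A minimal sketch, assuming a masked-LM head; the model path below is a placeholder for wherever this checkpoint is stored:

```python
from transformers import pipeline

# Placeholder path: substitute the actual location of this checkpoint.
fill_mask = pipeline(
    "fill-mask",
    model="path/to/tapt_helpfulness_base_pretraining_model_final",
)

# RoBERTa tokenizers use <mask> as the mask token.
print(fill_mask("The review was very <mask>."))
```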
|
|
|
|
|
## Intended uses & limitations |
|
|
|
|
|
Not documented. As a continued-pretraining checkpoint, it is presumably intended as a starting point for downstream fine-tuning (e.g. on a helpfulness task) rather than for direct use.
|
|
|
|
|
## Training and evaluation data |
|
|
|
|
|
Not documented. The evaluation table below implies roughly 232–233 optimization steps per epoch at an effective batch size of 42, i.e. a training set on the order of 9,800 examples.
|
|
|
|
|
## Training procedure |
|
|
|
|
|
### Training hyperparameters |
|
|
|
|
|
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
|
|
- learning_rate: 0.0001 |
|
|
- train_batch_size: 21 |
|
|
- eval_batch_size: 21 |
|
|
- seed: 42 |
|
|
- gradient_accumulation_steps: 2 |
|
|
- total_train_batch_size: 42 |
|
|
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 |
|
|
- lr_scheduler_type: linear |
|
|
- num_epochs: 100 |
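
A minimal sketch of the corresponding `TrainingArguments` for the `transformers` `Trainer`; `output_dir` is a placeholder, and settings not reported on this card (warmup, weight decay, etc.) are left at their defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tapt_helpfulness_base_pretraining_model_final",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=21,
    per_device_eval_batch_size=21,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 21 * 2 = 42
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```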
|
|
|
|
|
### Training results

Training was configured for 100 epochs, but the log below stops at epoch 48, which suggests the run was halted early; the headline evaluation loss of 1.4543 presumably comes from a final evaluation after training ended.

| Training Loss | Epoch | Step | Validation Loss |
|
|
|:-------------:|:-----:|:-----:|:---------------:| |
|
|
| 1.7697 | 1.0 | 232 | 1.5904 | |
|
|
| 1.6633 | 2.0 | 465 | 1.5650 | |
|
|
| 1.6314 | 3.0 | 697 | 1.5461 | |
|
|
| 1.594 | 4.0 | 930 | 1.5243 | |
|
|
| 1.5766 | 5.0 | 1162 | 1.5312 | |
|
|
| 1.5451 | 6.0 | 1395 | 1.5194 | |
|
|
| 1.5271 | 7.0 | 1627 | 1.5034 | |
|
|
| 1.5038 | 8.0 | 1860 | 1.5080 | |
|
|
| 1.4906 | 9.0 | 2092 | 1.4942 | |
|
|
| 1.4801 | 10.0 | 2325 | 1.4783 | |
|
|
| 1.4638 | 11.0 | 2557 | 1.4900 | |
|
|
| 1.4407 | 12.0 | 2790 | 1.4820 | |
|
|
| 1.4285 | 13.0 | 3022 | 1.4692 | |
|
|
| 1.4177 | 14.0 | 3255 | 1.4698 | |
|
|
| 1.4051 | 15.0 | 3487 | 1.4790 | |
|
|
| 1.3899 | 16.0 | 3720 | 1.4800 | |
|
|
| 1.3832 | 17.0 | 3952 | 1.4730 | |
|
|
| 1.3706 | 18.0 | 4185 | 1.4656 | |
|
|
| 1.3617 | 19.0 | 4417 | 1.4625 | |
|
|
| 1.3464 | 20.0 | 4650 | 1.4699 | |
|
|
| 1.3449 | 21.0 | 4882 | 1.4641 | |
|
|
| 1.3258 | 22.0 | 5115 | 1.4554 | |
|
|
| 1.3248 | 23.0 | 5347 | 1.4595 | |
|
|
| 1.3119 | 24.0 | 5580 | 1.4643 | |
|
|
| 1.3087 | 25.0 | 5812 | 1.4589 | |
|
|
| 1.2942 | 26.0 | 6045 | 1.4633 | |
|
|
| 1.2875 | 27.0 | 6277 | 1.4517 | |
|
|
| 1.2731 | 28.0 | 6510 | 1.4506 | |
|
|
| 1.2727 | 29.0 | 6742 | 1.4501 | |
|
|
| 1.261 | 30.0 | 6975 | 1.4492 | |
|
|
| 1.2559 | 31.0 | 7207 | 1.4553 | |
|
|
| 1.2437 | 32.0 | 7440 | 1.4429 | |
|
|
| 1.2404 | 33.0 | 7672 | 1.4456 | |
|
|
| 1.2301 | 34.0 | 7905 | 1.4497 | |
|
|
| 1.2277 | 35.0 | 8137 | 1.4400 | |
|
|
| 1.2154 | 36.0 | 8370 | 1.4491 | |
|
|
| 1.2118 | 37.0 | 8602 | 1.4521 | |
|
|
| 1.2022 | 38.0 | 8835 | 1.4362 | |
|
|
| 1.2027 | 39.0 | 9067 | 1.4431 | |
|
|
| 1.1883 | 40.0 | 9300 | 1.4526 | |
|
|
| 1.1861 | 41.0 | 9532 | 1.4596 | |
|
|
| 1.1747 | 42.0 | 9765 | 1.4390 | |
|
|
| 1.1708 | 43.0 | 9997 | 1.4501 | |
|
|
| 1.1636 | 44.0 | 10230 | 1.4549 | |
|
|
| 1.1623 | 45.0 | 10462 | 1.4616 | |
|
|
| 1.1569 | 46.0 | 10695 | 1.4379 | |
|
|
| 1.149 | 47.0 | 10927 | 1.4492 | |
|
|
| 1.1401 | 48.0 | 11160 | 1.4502 | |
|
|
|
|
|
|
|
|
### Framework versions |
|
|
|
|
|
- Transformers 4.38.2 |
|
|
- Pytorch 2.2.1+cu121 |
|
|
- Datasets 2.18.0 |
|
|
- Tokenizers 0.15.2 |
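
To reproduce the training environment, these versions can be pinned when installing; a quick sanity check, assuming the same packages are installed:

```python
import transformers, torch, datasets, tokenizers

# Versions recorded on this card; newer releases will likely also work.
print(transformers.__version__)  # 4.38.2 during training
print(torch.__version__)         # 2.2.1+cu121
print(datasets.__version__)      # 2.18.0
print(tokenizers.__version__)    # 0.15.2
```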
|
|
|