---
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: mdeberta-v3-base_binary_2_seed42_NL-IT
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# mdeberta-v3-base_binary_2_seed42_NL-IT

This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5350
- Accuracy: 0.7300
- F1: 0.7331
- Precision: 0.7389
- Recall: 0.7300
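
Since the model name indicates a binary sequence-classification head fine-tuned from mdeberta-v3-base, a minimal inference sketch might look as follows. The repository id, the example sentence, and the meaning of the two labels are assumptions; none of them are documented in this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repository id; replace with the actual location of this checkpoint.
model_id = "mdeberta-v3-base_binary_2_seed42_NL-IT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative input only; the training and evaluation data are not documented.
inputs = tokenizer("Questo è un esempio di frase.", return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# The label semantics (what class 0 and class 1 mean) are not specified in this card.
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```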
|
|
|
|
|
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
|
|
|
|
|
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 10
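
A minimal sketch of how these hyperparameters might map onto `transformers.TrainingArguments`; the output directory and the evaluation/save cadence are assumptions, as they are not recorded in this card.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters; paths and
# evaluation settings are assumptions not documented in the card.
training_args = TrainingArguments(
    output_dir="mdeberta-v3-base_binary_2_seed42_NL-IT",
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=10,
    evaluation_strategy="steps",  # the results table reports metrics every 100 steps
    eval_steps=100,
)
```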
|
|
|
|
|
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6574        | 0.2105 | 100  | 0.6380          | 0.6667   | 0.5333 | 0.4444    | 0.6667 |
| 0.6439        | 0.4211 | 200  | 0.6327          | 0.6667   | 0.5333 | 0.4444    | 0.6667 |
| 0.6343        | 0.6316 | 300  | 0.5922          | 0.6690   | 0.5409 | 0.6958    | 0.6690 |
| 0.601         | 0.8421 | 400  | 0.6094          | 0.6797   | 0.5854 | 0.6703    | 0.6797 |
| 0.5767        | 1.0526 | 500  | 0.5627          | 0.7117   | 0.7012 | 0.6992    | 0.7117 |
| 0.5517        | 1.2632 | 600  | 0.5363          | 0.7200   | 0.7070 | 0.7069    | 0.7200 |
| 0.5511        | 1.4737 | 700  | 0.5401          | 0.7094   | 0.7161 | 0.7338    | 0.7094 |
| 0.53          | 1.6842 | 800  | 0.5442          | 0.7141   | 0.7222 | 0.7592    | 0.7141 |
| 0.5194        | 1.8947 | 900  | 0.5258          | 0.7319   | 0.7366 | 0.7464    | 0.7319 |
| 0.4867        | 2.1053 | 1000 | 0.5259          | 0.7272   | 0.7317 | 0.7405    | 0.7272 |
| 0.4717        | 2.3158 | 1100 | 0.5466          | 0.7331   | 0.7279 | 0.7258    | 0.7331 |
| 0.4657        | 2.5263 | 1200 | 0.5385          | 0.7355   | 0.7381 | 0.7420    | 0.7355 |
| 0.4683        | 2.7368 | 1300 | 0.5309          | 0.7461   | 0.7470 | 0.7480    | 0.7461 |
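
The recall column matching the accuracy column at every checkpoint suggests the F1, precision, and recall values are weighted averages over the two classes. A minimal sketch of such a metric computation, assuming scikit-learn-style weighted metrics (the actual evaluation code is not included in this card):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(y_true, y_pred):
    # With average="weighted", recall equals accuracy, matching the table above.
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted"
    )
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```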
|
|
|
|
|
|
|
|
### Framework versions

- Transformers 4.40.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1
|
|
|