---
base_model: roberta-large
license: mit
metrics:
- accuracy
- f1
- precision
- recall
tags:
- generated_from_trainer
model-index:
- name: robertaL_ner
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/adam-fendri/huggingface/runs/9naabn8w)

# robertaL_ner

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set (these values correspond to the epoch 15 row of the results table below):
- Loss: 0.2206
- Accuracy: 0.9558
- F1: 0.9558
- Precision: 0.9560
- Recall: 0.9558
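
Given the model name and the per-token metrics, this appears to be a named-entity-recognition (token classification) fine-tune, so inference should follow the standard token-classification pipeline. The sketch below makes two assumptions: the checkpoint path is a placeholder (the card does not state the final Hub repo id), and the label set is whatever the training data defined.

```python
from transformers import pipeline

# Placeholder checkpoint: point this at the published robertaL_ner repo id
# or at the local Trainer output directory.
checkpoint = "path/to/robertaL_ner"

# "token-classification" loads the fine-tuned classification head on top of
# roberta-large; aggregation_strategy="simple" merges sub-word pieces into
# whole entity spans before returning them.
ner = pipeline("token-classification", model=checkpoint, aggregation_strategy="simple")

print(ner("Barack Obama visited the Eiffel Tower in Paris."))
```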

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 20
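
For reference, the values above map onto the 🤗 Transformers `TrainingArguments` roughly as sketched below. Only the hyperparameters listed in this card are taken from the run; the output directory, evaluation/save strategies, and logging backend are assumptions added for illustration.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="robertaL_ner",      # assumption: any output path works
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    warmup_steps=200,
    adam_beta1=0.9,                 # Adam betas and epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="epoch",          # assumption: per-epoch evaluation, matching the results table
    save_strategy="epoch",          # assumption
    report_to="wandb",              # assumption, suggested by the W&B badge above
)
```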

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.331         | 1.0   | 49   | 1.0308          | 0.6698   | 0.6131 | 0.6094    | 0.6698 |
| 0.8659        | 2.0   | 98   | 0.6391          | 0.7992   | 0.7929 | 0.7932    | 0.7992 |
| 0.5051        | 3.0   | 147  | 0.5164          | 0.8446   | 0.8389 | 0.8410    | 0.8446 |
| 0.4183        | 4.0   | 196  | 0.3752          | 0.8840   | 0.8827 | 0.8824    | 0.8840 |
| 0.4014        | 5.0   | 245  | 0.3487          | 0.8946   | 0.8926 | 0.8921    | 0.8946 |
| 0.2955        | 6.0   | 294  | 0.3009          | 0.9040   | 0.9049 | 0.9068    | 0.9040 |
| 0.2525        | 7.0   | 343  | 0.2478          | 0.9303   | 0.9303 | 0.9304    | 0.9303 |
| 0.2381        | 8.0   | 392  | 0.2498          | 0.9240   | 0.9243 | 0.9248    | 0.9240 |
| 0.2255        | 9.0   | 441  | 0.2214          | 0.9321   | 0.9318 | 0.9323    | 0.9321 |
| 0.1463        | 10.0  | 490  | 0.2258          | 0.9397   | 0.9396 | 0.9396    | 0.9397 |
| 0.151         | 11.0  | 539  | 0.2271          | 0.9421   | 0.9421 | 0.9422    | 0.9421 |
| 0.1213        | 12.0  | 588  | 0.2146          | 0.9500   | 0.9498 | 0.9499    | 0.9500 |
| 0.1166        | 13.0  | 637  | 0.2162          | 0.9494   | 0.9493 | 0.9496    | 0.9494 |
| 0.121         | 14.0  | 686  | 0.2442          | 0.9421   | 0.9424 | 0.9428    | 0.9421 |
| 0.0841        | 15.0  | 735  | 0.2206          | 0.9558   | 0.9558 | 0.9560    | 0.9558 |
| 0.0485        | 16.0  | 784  | 0.2555          | 0.9452   | 0.9452 | 0.9454    | 0.9452 |
| 0.0598        | 17.0  | 833  | 0.2338          | 0.9558   | 0.9558 | 0.9559    | 0.9558 |
| 0.0462        | 18.0  | 882  | 0.2443          | 0.9549   | 0.9549 | 0.9550    | 0.9549 |
| 0.0323        | 19.0  | 931  | 0.2531          | 0.9540   | 0.9540 | 0.9542    | 0.9540 |
| 0.0466        | 20.0  | 980  | 0.2509          | 0.9549   | 0.9549 | 0.9550    | 0.9549 |
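
The card does not state how the metrics were computed. The reported pattern (recall equal to accuracy, precision only marginally different) is what token-level metrics with weighted averaging would produce, so a plausible `compute_metrics` sketch is shown below; this is an assumption for illustration, not the run's actual code.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support


def compute_metrics(eval_pred):
    """Token-level metrics; -100 marks padding/sub-word positions to ignore (the usual Trainer convention)."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Keep only positions that carry a real label.
    mask = labels != -100
    y_true = labels[mask]
    y_pred = predictions[mask]

    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```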

### Framework versions

- Transformers 4.42.3
- PyTorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1