|
|
--- |
|
|
library_name: transformers |
|
|
license: cc-by-4.0 |
|
|
base_model: NbAiLab/nb-bert-base |
|
|
tags: |
|
|
- generated_from_trainer |
|
|
metrics: |
|
|
- precision |
|
|
- recall |
|
|
- accuracy |
|
|
model-index: |
|
|
- name: nb-bert-edu-scorer |
|
|
results: [] |
|
|
--- |
|
|
|
|
|
|
|
|
|
|
# nb-bert-edu-scorer |
|
|
|
|
|
This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) on an unknown dataset. |
|
|
It achieves the following results on the evaluation set: |
|
|
- Loss: 0.7249 |
|
|
- Precision: 0.3908 |
|
|
- Recall: 0.3347 |
|
|
- F1 Macro: 0.3334 |
|
|
- Accuracy: 0.4800
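The checkpoint is meant to be loaded as a sequence-classification model on top of nb-bert-base. The label set and score semantics are not documented in this card, so the snippet below is only a minimal usage sketch; the repository id, example text, and label handling are placeholders.

```python
# Hedged usage sketch: assumes this checkpoint uses a standard
# sequence-classification head; labels/score meaning are not documented here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "nb-bert-edu-scorer"  # placeholder: replace with the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Fotosyntese er prosessen der planter omdanner lysenergi til kjemisk energi."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
# id2label comes from the model config; its contents are not documented in this card
print(predicted_class, model.config.id2label.get(predicted_class, predicted_class))
```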
|
|
|
|
|
## Model description |
|
|
|
|
|
More information needed |
|
|
|
|
|
## Intended uses & limitations |
|
|
|
|
|
More information needed |
|
|
|
|
|
## Training and evaluation data |
|
|
|
|
|
More information needed |
|
|
|
|
|
## Training procedure |
|
|
|
|
|
### Training hyperparameters |
|
|
|
|
|
The following hyperparameters were used during training (a hedged reproduction sketch follows the list):
|
|
- learning_rate: 3e-05 |
|
|
- train_batch_size: 256 |
|
|
- eval_batch_size: 128 |
|
|
- seed: 0 |
|
|
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
|
|
- lr_scheduler_type: linear |
|
|
- num_epochs: 20 |
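A sketch of how these hyperparameters could map onto the Hugging Face `TrainingArguments`/`Trainer` API. The dataset, metric function, and number of labels are assumptions, since they are not documented in this card.

```python
# Hedged reproduction sketch of the hyperparameters listed above.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-bert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "NbAiLab/nb-bert-base",
    num_labels=6,  # assumption: the actual label count is not stated in this card
)

training_args = TrainingArguments(
    output_dir="nb-bert-edu-scorer",
    learning_rate=3e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=128,
    seed=0,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    optim="adamw_torch",  # AdamW (torch) with the betas/epsilon listed above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,      # not documented in this card
#     eval_dataset=eval_dataset,        # not documented in this card
#     compute_metrics=compute_metrics,  # precision / recall / F1 macro / accuracy
# )
# trainer.train()
```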
|
|
|
|
|
### Training results |
|
|
|
|
|
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Macro | Accuracy | |
|
|
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|:--------:|:--------:| |
|
|
| No log | 0 | 0 | 2.4712 | 0.0986 | 0.1654 | 0.0877 | 0.3496 | |
|
|
| 0.7734 | 2.6882 | 1000 | 0.7629 | 0.3988 | 0.3258 | 0.3215 | 0.4652 | |
|
|
| 0.7602 | 5.3763 | 2000 | 0.7505 | 0.3938 | 0.3319 | 0.3284 | 0.4584 | |
|
|
| 0.7548 | 8.0645 | 3000 | 0.7345 | 0.3924 | 0.3345 | 0.3319 | 0.4758 | |
|
|
| 0.731 | 10.7527 | 4000 | 0.7300 | 0.3951 | 0.3359 | 0.3335 | 0.4756 | |
|
|
| 0.7481 | 13.4409 | 5000 | 0.7274 | 0.3957 | 0.3356 | 0.3337 | 0.4818 | |
|
|
| 0.7255 | 16.1290 | 6000 | 0.7263 | 0.3910 | 0.3361 | 0.3339 | 0.4754 | |
|
|
| 0.7371        | 18.8172 | 7000 | 0.7249          | 0.3908    | 0.3347 | 0.3334   | 0.4800   |
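The precision, recall, and F1 values appear to be macro-averaged over classes (F1 is explicitly reported as macro). A minimal sketch of a `compute_metrics` function that would produce these four values; the exact implementation used for this model is not documented.

```python
# Hedged sketch of an evaluation metric function (not the author's exact code).
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support


def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0
    )
    return {
        "precision": precision,
        "recall": recall,
        "f1_macro": f1,
        "accuracy": accuracy_score(labels, preds),
    }
```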
|
|
|
|
|
|
|
|
### Framework versions |
|
|
|
|
|
- Transformers 4.53.2 |
|
|
- Pytorch 2.7.1+cu126 |
|
|
- Datasets 4.0.0 |
|
|
- Tokenizers 0.21.2 |
|
|
|