---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mitre-bert-base-cased
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# mitre-bert-base-cased

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0145
- Accuracy: 0.6994
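
These numbers correspond to the last row of the training results table below (step 7000). The card does not show how accuracy was computed; a minimal sketch of a `compute_metrics` hook for the 🤗 `Trainer`, assuming the standard `evaluate` accuracy metric, would look like this:

```python
# Hypothetical compute_metrics hook; the actual evaluation code is not
# documented in this card, so this is only a plausible reconstruction.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```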

## Model description

More information needed

## Intended uses & limitations

More information needed
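
The intended task is not documented, but the base model and the accuracy metric suggest sequence classification. A minimal inference sketch, assuming the checkpoint is published on the Hub under a placeholder repo id:

```python
# Minimal inference sketch. The repo id, example text, and label names are
# placeholders; the actual label set is not documented in this card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "your-username/mitre-bert-base-cased"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("Example input text.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```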

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
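
A sketch of how these settings map onto `TrainingArguments` for the 🤗 `Trainer`; dataset loading, tokenization, the label count, and the output directory are not documented in the card and appear here only as placeholders:

```python
# Sketch only: the dataset pipeline is omitted, and the output directory,
# evaluation schedule, and label count are assumptions not stated in the card.
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased",
    num_labels=2,  # placeholder; the real label count is not documented
)

training_args = TrainingArguments(
    output_dir="mitre-bert-base-cased",  # assumed output directory
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="steps",  # the 500-step cadence matches the results table below
    eval_steps=500,
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=..., eval_dataset=...,
#                   compute_metrics=compute_metrics)
# trainer.train()
```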

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2761        | 0.68  | 500  | 0.8453          | 0.6864   |
| 0.7448        | 1.36  | 1000 | 0.7566          | 0.7164   |
| 0.6056        | 2.04  | 1500 | 0.7187          | 0.7318   |
| 0.4763        | 2.72  | 2000 | 0.7134          | 0.7307   |
| 0.4276        | 3.41  | 2500 | 0.7604          | 0.7420   |
| 0.3855        | 4.09  | 3000 | 0.7493          | 0.7362   |
| 0.3303        | 4.77  | 3500 | 0.7727          | 0.7423   |
| 0.313         | 5.45  | 4000 | 0.8053          | 0.7263   |
| 0.2948        | 6.13  | 4500 | 0.8555          | 0.7280   |
| 0.2779        | 6.81  | 5000 | 0.8839          | 0.7127   |
| 0.2526        | 7.49  | 5500 | 0.9097          | 0.7144   |
| 0.2576        | 8.17  | 6000 | 0.9421          | 0.7171   |
| 0.2461        | 8.86  | 6500 | 0.9821          | 0.7018   |
| 0.2357        | 9.54  | 7000 | 1.0145          | 0.6994   |

### Framework versions

- Transformers 4.38.2
- PyTorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2