---
license: apache-2.0
base_model: answerdotai/ModernBERT-base
model-index:
- name: ECBERT-base-mlm
results: []
---
# ECBERT-base-mlm
This model is [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) further pretrained with masked language modeling (MLM) on 25,581 texts (available [here](https://huggingface.co/datasets/Graimond/ECBERT-mlm-dataset)). It has not yet been fine-tuned on the monetary policy sentiment analysis task.
The best checkpoint achieves the following result on an out-of-sample test set ([Graimond/ECBERT-idioms-dataset](https://huggingface.co/datasets/Graimond/ECBERT-idioms-dataset)):
- Accuracy: 40.00%
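Since this checkpoint exposes only an MLM head, it can be inspected with the `fill-mask` pipeline. A minimal sketch, assuming the model is published under the hub id `Graimond/ECBERT-base-mlm` (the id is not confirmed by this card) and using a made-up example sentence:

```python
from transformers import pipeline

# Assumed hub id -- substitute the actual repository path if it differs.
fill_mask = pipeline("fill-mask", model="Graimond/ECBERT-base-mlm")

# ModernBERT marks masked positions with the [MASK] token.
for prediction in fill_mask("The Governing Council decided to [MASK] the key ECB interest rates."):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.4f}")
```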
## Model description
ECBERT-base-mlm adapts ModernBERT-base to the domain of monetary policy text through continued MLM pretraining on 25,581 texts. The goal is a domain-aware encoder that can subsequently be fine-tuned for monetary policy sentiment analysis.
## Intended uses & limitations
The model is intended as a starting checkpoint for fine-tuning on monetary policy sentiment analysis. Out of the box it only produces MLM (fill-mask) predictions and should not be used directly for classification; see the loading sketch below.
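A hedged sketch for loading the checkpoint as a classification backbone. The hub id and the number of labels are assumptions, not confirmed by this card:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Graimond/ECBERT-base-mlm"  # assumed hub path
tokenizer = AutoTokenizer.from_pretrained(model_id)

# num_labels=3 is a placeholder for an assumed dovish/neutral/hawkish scheme.
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=3)

# The classification head is freshly initialized; the model must be
# fine-tuned on labeled sentiment data before its predictions are meaningful.
```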
## Training and evaluation data
- Training data: [Graimond/ECBERT-mlm-dataset](https://huggingface.co/datasets/Graimond/ECBERT-mlm-dataset)
- Evaluation data: [Graimond/ECBERT-idioms-dataset](https://huggingface.co/datasets/Graimond/ECBERT-idioms-dataset)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 5e-5
- weight_decay: 0.01
- per_device_train_batch_size: 16
- seed: 42
- num_train_epochs: 20
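The card does not include the training script itself. The following is a reproduction sketch using the listed hyperparameters; the text column name, the masking probability (0.15, the `transformers` default), and the presence of a validation split are assumptions:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_id = "answerdotai/ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForMaskedLM.from_pretrained(base_id)

dataset = load_dataset("Graimond/ECBERT-mlm-dataset")

def tokenize(batch):
    # Assumes the raw text lives in a "text" column.
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(
    tokenize, batched=True, remove_columns=dataset["train"].column_names
)

# Dynamic token masking; 0.15 is the library default, not stated on the card.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="ECBERT-base-mlm",
    learning_rate=5e-5,
    weight_decay=0.01,
    per_device_train_batch_size=16,
    num_train_epochs=20,
    seed=42,
    eval_strategy="epoch",  # matches the per-epoch validation losses below
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized.get("validation"),  # assumes a validation split exists
    data_collator=collator,
)
trainer.train()
```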
### Training results
| Epoch | Training Loss | Validation Loss |
|-------|---------------|-----------------|
| 1 | 1.905000 | 1.903329 |
| 2 | 1.689700 | 1.764568 |
| 3 | 1.600900 | nan |
| 4 | 1.476500 | 1.683352 |
| 5 | 1.381200 | 1.629597 |
| 6 | 1.367300 | nan |
| 7 | 1.230300 | 1.628195 |
| 8 | 1.142700 | 1.567721 |
| 9 | 1.131800 | 1.618517 |
| 10 | 1.139700 | nan |
| 11 | 1.086200 | nan |
| 12 | 1.072500 | 1.560426 |
| 13 | 0.984800 | 1.556072 |
| 14 | 0.958500 | 1.606674 |
| 15 | 0.955600 | 1.619744 |
| 16 | 0.920500 | 1.581421 |
| 17 | 0.882300 | 1.535872 |
| 18 | 0.877900 | 1.565936 |
| 19 | 0.803100 | nan |
| 20 | 0.815700 | 1.604986 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0 |