---
tags:
- generated_from_trainer
datasets:
- Hartunka/processed_wikitext-103-raw-v1-rand-50
metrics:
- accuracy
model-index:
- name: tiny_bert_rand_50_v1
  results:
  - task:
      name: Masked Language Modeling
      type: fill-mask
    dataset:
      name: Hartunka/processed_wikitext-103-raw-v1-rand-50
      type: Hartunka/processed_wikitext-103-raw-v1-rand-50
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.152172684635814
---
# tiny_bert_rand_50_v1
This model is a fine-tuned version of an unspecified base model on the Hartunka/processed_wikitext-103-raw-v1-rand-50 dataset. It achieves the following results on the evaluation set:
- Loss: 10.2437
- Accuracy: 0.1522
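
The checkpoint targets the `fill-mask` task, so it can be exercised directly through the `transformers` pipeline. A minimal sketch, assuming the model is published under the `Hartunka/tiny_bert_rand_50_v1` repository id and uses the standard BERT `[MASK]` token (both are assumptions, not stated in this card):

```python
from transformers import pipeline

# Repo id is assumed from the model name above; adjust if the checkpoint lives elsewhere.
unmasker = pipeline("fill-mask", model="Hartunka/tiny_bert_rand_50_v1")

# Predict the masked token; [MASK] is the standard BERT mask token (assumed here).
for prediction in unmasker("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```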
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
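
The corpus named in the metadata is hosted on the Hub; a minimal loading sketch (the available splits and column names are not documented here and should be checked on the dataset page):

```python
from datasets import load_dataset

# Load the preprocessed WikiText-103 variant used for this run;
# split and column names may differ from the defaults and should be verified.
dataset = load_dataset("Hartunka/processed_wikitext-103-raw-v1-rand-50")
print(dataset)
```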
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 25
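
As a rough sketch, the values above map onto a `TrainingArguments` configuration along these lines (argument names follow the Transformers 4.40 API; the original training script is not included in this card, so this is an approximation, not the exact setup):

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="tiny_bert_rand_50_v1",
    learning_rate=1e-4,
    per_device_train_batch_size=96,
    per_device_eval_batch_size=96,
    seed=10,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=10_000,
    num_train_epochs=25,
)
```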
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 9.9989 | 4.1982 | 10000 | 10.2161 | 0.1500 |
| 9.4887 | 8.3963 | 20000 | 10.3731 | 0.1529 |
| 8.8108 | 12.5945 | 30000 | 11.0256 | 0.1540 |
| 7.9917 | 16.7926 | 40000 | 11.8612 | 0.1512 |
| 7.3605 | 20.9908 | 50000 | 12.6595 | 0.1514 |
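
Accuracy in this table is token-level accuracy over masked positions on the validation set. The exact metric implementation for this run is not shown in the card; a hedged sketch of how such a metric is typically computed with a Trainer `compute_metrics` callback:

```python
import numpy as np

def compute_metrics(eval_pred):
    # eval_pred carries model predictions (logits or pre-argmaxed token ids) and labels;
    # labels are -100 at unmasked positions, which are excluded from the accuracy.
    predictions = eval_pred.predictions
    labels = eval_pred.label_ids
    if predictions.ndim == 3:  # raw logits of shape (batch, seq_len, vocab)
        predictions = predictions.argmax(axis=-1)
    mask = labels != -100
    accuracy = (predictions[mask] == labels[mask]).mean()
    return {"accuracy": float(accuracy)}
```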
### Framework versions
- Transformers 4.40.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.19.1