---
library_name: transformers
base_model: dccuchile/albert-base-spanish
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mi-super-modelo
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# mi-super-modelo

This model is a fine-tuned version of [dccuchile/albert-base-spanish](https://huggingface.co/dccuchile/albert-base-spanish) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5146
- Accuracy: 0.38

## Model description

More information needed

## Intended uses & limitations

More information needed
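
The card does not document the task this checkpoint was fine-tuned for. As a rough illustration only, assuming a text-classification head (which the accuracy metric suggests but the card does not confirm) and that the checkpoint is saved locally or on the Hub as `mi-super-modelo`, inference could look like this:

```python
# Minimal inference sketch (assumptions: sequence-classification head,
# checkpoint available at ./mi-super-modelo or a matching Hub repo id).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "./mi-super-modelo"  # hypothetical path / repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Un texto de ejemplo en español.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```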

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
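
As a sketch only, these hyperparameters map onto `TrainingArguments` roughly as follows; the `output_dir`, `eval_strategy`, `eval_steps`, and `logging_steps` values are assumptions inferred from the results table below, and the dataset, model head, and metric function are not documented in this card:

```python
# Hedged sketch of TrainingArguments matching the listed hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mi-super-modelo",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    eval_strategy="steps",         # inferred: the table reports an eval every 2 steps
    eval_steps=2,
    logging_steps=2,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 match the Trainer defaults.
)
```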

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.7601        | 0.0455 | 2    | 1.7148          | 0.3333   |
| 1.7063        | 0.0909 | 4    | 1.6888          | 0.3      |
| 1.5512        | 0.1364 | 6    | 1.6585          | 0.2933   |
| 1.5312        | 0.1818 | 8    | 1.6514          | 0.3      |
| 1.5006        | 0.2273 | 10   | 1.6581          | 0.3      |
| 1.6519        | 0.2727 | 12   | 1.6511          | 0.3067   |
| 1.6397        | 0.3182 | 14   | 1.6347          | 0.3      |
| 1.5275        | 0.3636 | 16   | 1.6183          | 0.3067   |
| 1.8253        | 0.4091 | 18   | 1.5949          | 0.3      |
| 1.7725        | 0.4545 | 20   | 1.5708          | 0.3      |
| 1.5334        | 0.5    | 22   | 1.5591          | 0.3067   |
| 1.3062        | 0.5455 | 24   | 1.5535          | 0.3133   |
| 1.4629        | 0.5909 | 26   | 1.5459          | 0.32     |
| 1.5431        | 0.6364 | 28   | 1.5408          | 0.3333   |
| 1.62          | 0.6818 | 30   | 1.5355          | 0.36     |
| 1.4165        | 0.7273 | 32   | 1.5299          | 0.36     |
| 1.6135        | 0.7727 | 34   | 1.5244          | 0.3733   |
| 1.5181        | 0.8182 | 36   | 1.5220          | 0.3733   |
| 1.3877        | 0.8636 | 38   | 1.5196          | 0.3733   |
| 1.6638        | 0.9091 | 40   | 1.5174          | 0.3733   |
| 1.4348        | 0.9545 | 42   | 1.5154          | 0.38     |
| 1.4226        | 1.0    | 44   | 1.5146          | 0.38     |

### Framework versions

- Transformers 4.45.2
- Pytorch 2.5.0
- Datasets 3.1.0
- Tokenizers 0.20.1