---
license: apache-2.0
base_model: PlanTL-GOB-ES/roberta-base-bne
tags:
- generated_from_keras_callback
model-index:
- name: RafaelMayer/roberta-copec-1
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# RafaelMayer/roberta-copec-1

This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6572
- Validation Loss: 0.6316
- Train Accuracy: 0.8235
- Train Precision: [0. 0.82352941]
- Train Precision W: 0.6782
- Train Recall: [0. 1.]
- Train Recall W: 0.8235
- Train F1: [0. 0.90322581]
- Train F1 W: 0.7438
- Epoch: 1
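
The card does not say which task the model was trained for, but the two-element per-class metrics above point to a binary sequence-classification head. Below is a minimal usage sketch under that assumption; the example sentence is a placeholder and the meaning of the two labels is not documented here:

```python
# Minimal usage sketch. Assumes a TensorFlow sequence-classification checkpoint;
# what class 0 / class 1 mean is not documented in this card.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("RafaelMayer/roberta-copec-1")
model = TFAutoModelForSequenceClassification.from_pretrained("RafaelMayer/roberta-copec-1")

# The base model is Spanish (roberta-base-bne), so the input should be Spanish text.
inputs = tokenizer("Texto de ejemplo en español.", return_tensors="tf")
logits = model(**inputs).logits
probs = tf.nn.softmax(logits, axis=-1).numpy()[0]
print("predicted class:", int(tf.argmax(logits, axis=-1)[0]), "probabilities:", probs)
```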
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 5, 'power': 1.0, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} (reconstructed in the sketch below)
- training_precision: float32
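
The serialized optimizer above is Adam with a 5-step linear warmup followed by a 35-step linear decay (PolynomialDecay with power 1.0) from 2e-05 down to 0. The following sketch reconstructs it with the TensorFlow utilities in `transformers`; it is an inference from the config dump, not the original training script:

```python
# Reconstruction sketch of the serialized optimizer above (inferred from the
# config dump; not the original training code).
import tensorflow as tf
from transformers import WarmUp

# Linear decay from 2e-05 to 0 over 35 steps ('decay_steps' in the dump) ...
decay = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=35,
    end_learning_rate=0.0,
    power=1.0,
)
# ... preceded by 5 linear warmup steps ('warmup_steps' in the dump).
schedule = WarmUp(initial_learning_rate=2e-05, decay_schedule_fn=decay, warmup_steps=5)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```

The helper `transformers.create_optimizer(init_lr=2e-05, num_train_steps=40, num_warmup_steps=5)` should yield an equivalent schedule, since it uses `num_train_steps - num_warmup_steps` as the decay length.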
### Training results

| Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Precision W | Train Recall | Train Recall W | Train F1 | Train F1 W | Epoch |
|:----------:|:---------------:|:--------------:|:---------------:|:-----------------:|:------------:|:--------------:|:---------------:|:----------:|:-----:|
| 0.6572 | 0.6316 | 0.8235 | [0. 0.82352941] | 0.6782 | [0. 1.] | 0.8235 | [0. 0.90322581] | 0.7438 | 1 |
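
In the table above, the bracketed values are per-class scores (class 0, class 1) and the "W" columns are their support-weighted averages. The sketch below shows how such figures are typically computed with scikit-learn; the label and prediction arrays are purely illustrative, since the evaluation data are not described in this card:

```python
# Illustrative only: hypothetical labels/predictions in which every example is
# assigned class 1, which reproduces the [0., x] per-class pattern reported above.
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

y_true = np.array([0, 0, 0] + [1] * 14)   # hypothetical ground truth
y_pred = np.ones_like(y_true)             # hypothetical predictions (always class 1)

per_class = precision_recall_fscore_support(y_true, y_pred, average=None, zero_division=0)
weighted = precision_recall_fscore_support(y_true, y_pred, average="weighted", zero_division=0)

print(per_class[:3])  # per-class precision, recall, F1 arrays
print(weighted[:3])   # support-weighted precision, recall, F1 scalars
```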
### Framework versions

- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3