---
license: apache-2.0
tags:
- generated_from_keras_callback
base_model: hfl/chinese-roberta-wwm-ext
model-index:
- name: celera_relevance
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# celera_relevance

This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results after the final training epoch:
- Train Loss: 0.3072
- Train Sparse Categorical Accuracy: 0.8813
- Validation Loss: 0.4371
- Validation Sparse Categorical Accuracy: 0.8295
- Epoch: 2

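Given the base model and the sparse categorical accuracy metric, this checkpoint appears to be a TensorFlow sequence-classification model. Below is a minimal inference sketch, assuming the checkpoint is available under the (hypothetical) id `celera_relevance`, that it exposes a classification head, and that the inputs are (query, passage) pairs scored for relevance; none of these details are confirmed by the card.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Assumed checkpoint location; replace with the actual repo id or a local path.
checkpoint = "celera_relevance"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint)

# Score a (query, passage) pair; the pair formulation is an assumption.
inputs = tokenizer("手机多久可以发货", "订单将在三个工作日内发出",
                   return_tensors="tf", truncation=True)
logits = model(**inputs).logits
probs = tf.nn.softmax(logits, axis=-1)
predicted_class = int(tf.argmax(probs, axis=-1)[0])
print(predicted_class, probs.numpy())
```
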
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32

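As a sketch only, the optimizer configuration above corresponds to a standard Keras compile step like the following. The loss, `from_logits=True`, and `num_labels=2` are assumptions (the card records only the optimizer settings and the sparse categorical accuracy metric).

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# num_labels=2 is an assumption; the card does not state the label count.
model = TFAutoModelForSequenceClassification.from_pretrained(
    "hfl/chinese-roberta-wwm-ext", num_labels=2
)

# Adam exactly as listed in the hyperparameters above
# (the configured decay of 0.0 is the Keras default, so it is omitted here).
optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    amsgrad=False,
)

# Loss is assumed: Transformers TF classification heads return logits,
# and the tracked metric is sparse categorical accuracy.
model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)
```
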
### Training results

| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.4060     | 0.8274                            | 0.3665          | 0.8440                                 | 0     |
| 0.3388     | 0.8594                            | 0.3639          | 0.8585                                 | 1     |
| 0.3072     | 0.8813                            | 0.4371          | 0.8295                                 | 2     |


### Framework versions

- Transformers 4.16.0
- TensorFlow 2.7.0
- Datasets 1.18.1
- Tokenizers 0.11.0

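An environment approximating the versions listed above can be set up with something like `pip install transformers==4.16.0 tensorflow==2.7.0 datasets==1.18.1 tokenizers==0.11.0`.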