---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: letingliu/holder_type
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# letingliu/holder_type
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results at the end of training:
- Train Loss: 0.5101
- Validation Loss: 0.4941
- Train Accuracy: 0.8942
- Epoch: 19
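
The card does not yet document intended uses, so the snippet below is only a minimal inference sketch. It assumes the checkpoint exposes a TensorFlow sequence-classification head; the repository id `letingliu/holder_type` is taken from the metadata above, and the label mapping is not documented, so the printed label is simply whatever `id2label` the saved config carries.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Load the fine-tuned checkpoint (assumes a TF sequence-classification head;
# labels are not documented in this card, so we fall back to id2label from the config).
tokenizer = AutoTokenizer.from_pretrained("letingliu/holder_type")
model = TFAutoModelForSequenceClassification.from_pretrained("letingliu/holder_type")

inputs = tokenizer("Example input text", return_tensors="tf", truncation=True)
logits = model(**inputs).logits
predicted_class_id = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label.get(predicted_class_id, predicted_class_id))
```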
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: Adam
  - learning_rate: PolynomialDecay schedule (initial_learning_rate: 2e-05, decay_steps: 30, end_learning_rate: 0.0, power: 1.0, cycle: False)
  - beta_1: 0.9
  - beta_2: 0.999
  - epsilon: 1e-08
  - amsgrad: False
  - weight_decay: None
  - clipnorm / global_clipnorm / clipvalue: None
  - use_ema: False (ema_momentum: 0.99, ema_overwrite_frequency: None)
  - jit_compile: False
  - is_legacy_optimizer: False
- training_precision: float32
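
These settings correspond to a standard Keras Adam optimizer driven by a linear PolynomialDecay learning-rate schedule. As a non-authoritative sketch, they can be reconstructed as follows; the values are copied from the hyperparameter list above, and the original training script is not part of this card.

```python
import tensorflow as tf

# Linear decay from 2e-05 to 0 over 30 optimizer steps, as listed above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=30,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam optimizer with the remaining hyperparameters from the card.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```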
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6872 | 0.6614 | 0.6154 | 0 |
| 0.6474 | 0.6141 | 0.8365 | 1 |
| 0.5998 | 0.5594 | 0.8846 | 2 |
| 0.5464 | 0.5138 | 0.8942 | 3 |
| 0.5160 | 0.4941 | 0.8942 | 4 |
| 0.4997 | 0.4941 | 0.8942 | 5 |
| 0.4984 | 0.4941 | 0.8942 | 6 |
| 0.5082 | 0.4941 | 0.8942 | 7 |
| 0.5010 | 0.4941 | 0.8942 | 8 |
| 0.5084 | 0.4941 | 0.8942 | 9 |
| 0.5026 | 0.4941 | 0.8942 | 10 |
| 0.5065 | 0.4941 | 0.8942 | 11 |
| 0.5019 | 0.4941 | 0.8942 | 12 |
| 0.5066 | 0.4941 | 0.8942 | 13 |
| 0.4976 | 0.4941 | 0.8942 | 14 |
| 0.5072 | 0.4941 | 0.8942 | 15 |
| 0.5018 | 0.4941 | 0.8942 | 16 |
| 0.5097 | 0.4941 | 0.8942 | 17 |
| 0.5131 | 0.4941 | 0.8942 | 18 |
| 0.5101 | 0.4941 | 0.8942 | 19 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.0
- Tokenizers 0.15.0