---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ratish/DBERT_Fault_LR_v2.1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ratish/DBERT_Fault_LR_v2.1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results after the final training epoch:
- Train Loss: 0.1501
- Validation Loss: 0.6305
- Train Accuracy: 0.7179
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
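
While detailed usage guidance has not been filled in, the model can be loaded like any `transformers` sequence-classification checkpoint. Below is a minimal, hedged inference sketch, not the author's documented usage: it assumes TensorFlow and `transformers` are installed, and since the card does not list the class labels, it returns raw per-class probabilities.

```python
import math

def softmax(logits):
    """Turn raw classifier logits into probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(text, model_name="ratish/DBERT_Fault_LR_v2.1"):
    """Score a single text with the fine-tuned DistilBERT classifier."""
    # Imported lazily so the softmax helper above stays dependency-free.
    from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
    inputs = tokenizer(text, return_tensors="tf", truncation=True)
    logits = model(**inputs).logits.numpy()[0].tolist()
    return softmax(logits)
```

`classify("some input text")` returns one probability per class; which class index corresponds to which label is not documented on this card.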
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: Adam (beta_1: 0.9, beta_2: 0.999, epsilon: 1e-08, amsgrad: False, jit_compile: True; no weight decay, no gradient clipping, EMA disabled)
- learning rate schedule: PolynomialDecay (initial_learning_rate: 2e-06, decay_steps: 9120, end_learning_rate: 0.0, power: 1.0, cycle: False), i.e. a linear decay from 2e-06 to 0 over 9,120 steps
- training_precision: float32
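
With power 1.0 and cycle off, the PolynomialDecay schedule above is simply a linear ramp from 2e-06 down to 0 over 9,120 steps. As a sketch (constants taken from the config above), the rate Keras applies at each step is:

```python
INITIAL_LR = 2e-06   # initial_learning_rate
END_LR = 0.0         # end_learning_rate
DECAY_STEPS = 9120   # decay_steps
POWER = 1.0          # power (1.0 => linear decay)

def polynomial_decay(step):
    """Learning rate at a given global step, per Keras PolynomialDecay with cycle=False."""
    step = min(step, DECAY_STEPS)  # with cycle=False the step is clamped at decay_steps
    frac = 1.0 - step / DECAY_STEPS
    return (INITIAL_LR - END_LR) * frac ** POWER + END_LR
```

For example, halfway through the schedule (step 4,560) the learning rate is 1e-06, and it stays at 0 for any step past 9,120.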
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6963 | 0.6916 | 0.5128 | 0 |
| 0.6774 | 0.6929 | 0.5128 | 1 |
| 0.6631 | 0.7000 | 0.5128 | 2 |
| 0.6580 | 0.7070 | 0.5128 | 3 |
| 0.6409 | 0.7104 | 0.5128 | 4 |
| 0.6296 | 0.7015 | 0.5128 | 5 |
| 0.6115 | 0.6866 | 0.5128 | 6 |
| 0.5940 | 0.6573 | 0.5897 | 7 |
| 0.5616 | 0.6263 | 0.5897 | 8 |
| 0.5230 | 0.5886 | 0.6667 | 9 |
| 0.4890 | 0.5608 | 0.7179 | 10 |
| 0.4523 | 0.5386 | 0.7436 | 11 |
| 0.4307 | 0.5424 | 0.7179 | 12 |
| 0.4013 | 0.5261 | 0.7179 | 13 |
| 0.3893 | 0.4976 | 0.7436 | 14 |
| 0.3634 | 0.5459 | 0.6923 | 15 |
| 0.3337 | 0.4893 | 0.7436 | 16 |
| 0.3243 | 0.5490 | 0.7179 | 17 |
| 0.3083 | 0.5091 | 0.7179 | 18 |
| 0.2815 | 0.5457 | 0.7179 | 19 |
| 0.2654 | 0.5692 | 0.7179 | 20 |
| 0.2535 | 0.4808 | 0.7436 | 21 |
| 0.2504 | 0.5912 | 0.6923 | 22 |
| 0.2132 | 0.6228 | 0.6923 | 23 |
| 0.1962 | 0.5834 | 0.7179 | 24 |
| 0.2136 | 0.5261 | 0.7692 | 25 |
| 0.1895 | 0.6210 | 0.7179 | 26 |
| 0.1722 | 0.7140 | 0.7179 | 27 |
| 0.1580 | 0.6532 | 0.6923 | 28 |
| 0.1501 | 0.6305 | 0.7179 | 29 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3