---
license: apache-2.0
base_model: google/t5-efficient-tiny
tags:
- generated_from_trainer
datasets:
- generator
metrics:
- precision
- recall
- f1
model-index:
- name: salt_language_Classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: generator
type: generator
config: default
split: train
args: default
metrics:
- name: Precision
type: precision
value: 1.0
- name: Recall
type: recall
value: 1.0
- name: F1
type: f1
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# salt_language_Classification
This model is a fine-tuned version of [google/t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
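
The snippet below is a minimal inference sketch, assuming the checkpoint is hosted on the Hugging Face Hub with a sequence-classification head; the repository id `yigagilbert/salt_language_Classification` and the example sentence are illustrative placeholders, not confirmed by this card.

```python
from transformers import pipeline

# Hypothetical repository id for this checkpoint; replace with the actual path.
model_id = "yigagilbert/salt_language_Classification"

# Loads the fine-tuned classification head on top of the t5-efficient-tiny backbone.
classifier = pipeline("text-classification", model=model_id)

# The returned label names come from the model's config (id2label).
print(classifier("Example sentence whose language should be identified."))
```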
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 20000
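
As a rough guide, the settings above correspond to a `TrainingArguments` configuration along the following lines. This is a sketch only: the output directory is a placeholder, and the surrounding Trainer/data-collator wiring is not taken from the original training script.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="salt_language_Classification",
    learning_rate=1e-3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=10,
    max_steps=20_000,
)
```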
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:---:|
| 0.0 | 0.025 | 500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.05 | 1000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.075 | 1500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.1 | 2000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.125 | 2500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.15 | 3000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.175 | 3500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.2 | 4000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.225 | 4500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.25 | 5000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.275 | 5500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.3 | 6000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.325 | 6500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.35 | 7000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.375 | 7500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.4 | 8000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.425 | 8500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.45 | 9000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.475 | 9500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.5 | 10000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.525 | 10500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.55 | 11000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.575 | 11500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.6 | 12000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.625 | 12500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.65 | 13000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.675 | 13500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.7 | 14000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.725 | 14500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.75 | 15000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.775 | 15500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.8 | 16000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.825 | 16500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.85 | 17000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.875 | 17500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.9 | 18000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.925 | 18500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.95 | 19000 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 0.975 | 19500 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 1.0 | 20000 | 0.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1