Model save

Files changed:
- README.md +18 -2
- emissions.csv +2 -0
- model.safetensors +1 -1
- runs/Jul10_16-25-06_dd8486c7a8d9/events.out.tfevents.1752164723.dd8486c7a8d9.24621.1 +3 -0
- runs/Jul10_16-39-45_dd8486c7a8d9/events.out.tfevents.1752165655.dd8486c7a8d9.40072.0 +3 -0
- runs/Jul10_16-39-45_dd8486c7a8d9/events.out.tfevents.1752170515.dd8486c7a8d9.40072.1 +3 -0
- training_args.bin +1 -1
README.md CHANGED

@@ -4,6 +4,8 @@ license: apache-2.0
 base_model: openai/whisper-small
 tags:
 - generated_from_trainer
+metrics:
+- wer
 model-index:
 - name: enenlhet-whisper
   results: []
@@ -15,6 +17,9 @@ should probably proofread and complete it, then remove this comment. -->
 # enenlhet-whisper
 
 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 2.6704
+- Wer: 350.3996
 
 ## Model description
 
@@ -39,10 +44,21 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps:
-- training_steps:
+- lr_scheduler_warmup_steps: 200
+- training_steps: 1000
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch  | Step | Validation Loss | Wer       |
+|:-------------:|:------:|:----:|:---------------:|:---------:|
+| 3.4691        | 0.6349 | 200  | 3.4181          | 412.1878  |
+| 2.8359        | 1.2698 | 400  | 2.9265          | 1983.0669 |
+| 2.5813        | 1.9048 | 600  | 2.7429          | 869.3806  |
+| 2.3921        | 2.5397 | 800  | 2.7003          | 761.1888  |
+| 2.1266        | 3.1746 | 1000 | 2.6704          | 350.3996  |
+
+
 ### Framework versions
 
 - Transformers 4.48.0
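The Wer figures above are percentages and exceed 100, which is possible because word error rate counts substitutions, insertions, and deletions against the number of reference words, so a hypothesis much longer than its reference inflates the score. A minimal pure-Python sketch of the metric (illustrative only; not the evaluation code used for this model):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: (subs + ins + dels) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein distance via a rolling DP row.
    prev_row = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        cur_row = [i]
        for j, h in enumerate(hyp, start=1):
            cur_row.append(min(prev_row[j] + 1,             # deletion
                               cur_row[j - 1] + 1,          # insertion
                               prev_row[j - 1] + (r != h))) # substitution/match
        prev_row = cur_row
    return 100.0 * prev_row[-1] / len(ref)

# 1 substitution over 2 reference words -> 50.0
score = wer("hello world", "hello word")

# 3 insertions against a 1-word reference -> 300.0, i.e. WER above 100%.
inflated = wer("a", "a b c d")
```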
emissions.csv ADDED

@@ -0,0 +1,2 @@
+timestamp,project_name,run_id,experiment_id,duration,emissions,emissions_rate,cpu_power,gpu_power,ram_power,cpu_energy,gpu_energy,ram_energy,energy_consumed,country_name,country_iso_code,region,cloud_provider,cloud_region,os,python_version,codecarbon_version,cpu_count,cpu_model,gpu_count,gpu_model,longitude,latitude,ram_total_size,tracking_mode,on_cloud,pue
+2025-07-10T17:54:15,codecarbon,29f11b05-f5e5-4696-b691-eee3be744aa8,5b0fa12a-3dd7-45bb-9766-cc326314d9f1,4398.062264288,0.09624162954250555,2.188273465884778e-05,42.5,52.081347793114205,38.0,0.05173205740056678,0.10682776796214602,0.04586902942283713,0.20442885478554992,Singapore,SGP,,,,Linux-6.1.123+-x86_64-with-glibc2.35,3.11.13,3.0.2,12,Intel(R) Xeon(R) CPU @ 2.20GHz,1,1 x NVIDIA A100-SXM4-40GB,103.8507,1.2872,83.4760627746582,machine,N,1.0
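The emissions.csv row is a standard codecarbon log line. A quick sketch of reading it with the stdlib csv module; the string literal below reproduces the committed values but is abridged to the columns actually used:

```python
import csv
import io

# Header and data row from this commit, cut down to a few columns;
# the real file carries the full codecarbon schema.
EMISSIONS_CSV = """\
timestamp,duration,emissions,emissions_rate,energy_consumed
2025-07-10T17:54:15,4398.062264288,0.09624162954250555,2.188273465884778e-05,0.20442885478554992
"""

row = next(csv.DictReader(io.StringIO(EMISSIONS_CSV)))
duration_s = float(row["duration"])        # seconds tracked (~73 minutes)
emissions_kg = float(row["emissions"])     # kg CO2-eq for the run
rate = float(row["emissions_rate"])        # kg CO2-eq per second
energy_kwh = float(row["energy_consumed"]) # total energy in kWh
```

The rate column is simply emissions divided by duration, which is a handy sanity check when aggregating several runs.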
model.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:aecd40ef8f58fed5b97f03b71d3fe989a5fb484aff603762c74ba85853c35e5d
 size 966995080
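The binary entries in this commit are Git LFS pointer files: three lines giving the spec version, a sha256 object id, and the byte size of the real blob. A small sketch of parsing one (a hypothetical helper, not part of this repo; the sample pointer reuses the model.safetensors values above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its version, oid, and size fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),  # 64 hex chars
        "size": int(fields["size"]),                   # bytes of the real blob
    }

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:aecd40ef8f58fed5b97f03b71d3fe989a5fb484aff603762c74ba85853c35e5d
size 966995080
"""
info = parse_lfs_pointer(pointer)
```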
runs/Jul10_16-25-06_dd8486c7a8d9/events.out.tfevents.1752164723.dd8486c7a8d9.24621.1 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a32a4de35c0c107e652417fff317f412cb1974ab2346a9f18d4c3c38e47ad829
+size 7089
runs/Jul10_16-39-45_dd8486c7a8d9/events.out.tfevents.1752165655.dd8486c7a8d9.40072.0 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a3320f483bb7ea84250e568ef2c33954472acebae8ab63c6d76eb19b7e8fb8f6
+size 17235
runs/Jul10_16-39-45_dd8486c7a8d9/events.out.tfevents.1752170515.dd8486c7a8d9.40072.1 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:df100472e7d87c5795f16a6a08c72ba74d58eeeb33b0e8ddb1b6347109957efc
+size 406
training_args.bin CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:6afc11505f01720f012673877c568b48dbac8966e126f892a3f50c970290760e
 size 5560