End of training
Browse files
- README.md +26 -18
- model.safetensors +1 -1
- runs/Jul10_14-36-18_tardis/events.out.tfevents.1752150979.tardis.66384.0 +3 -0
- training_args.bin +1 -1
README.md CHANGED
@@ -22,21 +22,21 @@ should probably proofread and complete it, then remove this comment. -->

This model is a fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on an unknown dataset.
It achieves the following results on the evaluation set:
-- Loss:
-- Rouge1: 0.
-- Rouge2: 0.
-- Rougel: 0.
-- Rougelsum: 0.
-- Gen Len:
-- Bleu: 0.
-- Precisions: 0.
+- Loss: 7.4155
+- Rouge1: 0.2826
+- Rouge2: 0.1063
+- Rougel: 0.2061
+- Rougelsum: 0.2052
+- Gen Len: 63.0
+- Bleu: 0.0492
+- Precisions: 0.0722
- Brevity Penalty: 1.0
-- Length Ratio: 1.
-- Translation Length:
+- Length Ratio: 1.8591
+- Translation Length: 2270.0
- Reference Length: 1221.0
-- Precision: 0.
-- Recall: 0.
-- F1: 0.
+- Precision: 0.8414
+- Recall: 0.8739
+- F1: 0.8573
- Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1)

## Model description
@@ -64,16 +64,24 @@ The following hyperparameters were used during training:
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
-- num_epochs:
+- num_epochs: 12

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Precision | Recall | F1 | Hashcode |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:------:|:----------:|:---------------:|:------------:|:------------------:|:----------------:|:---------:|:------:|:------:|:---------------------------------------------------------:|
-
-
-
-
+| No log | 1.0 | 7 | 24.9448 | 0.2712 | 0.0914 | 0.2009 | 0.2009 | 62.3 | 0.0448 | 0.0682 | 1.0 | 1.8034 | 2202.0 | 1221.0 | 0.8369 | 0.872 | 0.854 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 2.0 | 14 | 22.3346 | 0.272 | 0.0918 | 0.2029 | 0.2028 | 62.18 | 0.0446 | 0.068 | 1.0 | 1.8116 | 2212.0 | 1221.0 | 0.8374 | 0.8719 | 0.8542 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 3.0 | 21 | 20.5424 | 0.2747 | 0.0981 | 0.2024 | 0.2014 | 62.36 | 0.0451 | 0.0687 | 1.0 | 1.8321 | 2237.0 | 1221.0 | 0.8394 | 0.8723 | 0.8555 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 4.0 | 28 | 18.9191 | 0.2817 | 0.1109 | 0.2093 | 0.2087 | 62.42 | 0.0532 | 0.0754 | 1.0 | 1.8165 | 2218.0 | 1221.0 | 0.8397 | 0.8746 | 0.8567 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 5.0 | 35 | 17.4649 | 0.28 | 0.1103 | 0.2079 | 0.2071 | 62.42 | 0.0537 | 0.0761 | 1.0 | 1.8141 | 2215.0 | 1221.0 | 0.8399 | 0.8739 | 0.8565 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 6.0 | 42 | 16.0864 | 0.291 | 0.1082 | 0.2125 | 0.2118 | 62.68 | 0.0527 | 0.0762 | 1.0 | 1.8305 | 2235.0 | 1221.0 | 0.8425 | 0.8758 | 0.8588 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 7.0 | 49 | 14.7017 | 0.296 | 0.112 | 0.2185 | 0.2176 | 62.78 | 0.0537 | 0.0767 | 1.0 | 1.8256 | 2229.0 | 1221.0 | 0.8433 | 0.8765 | 0.8595 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 8.0 | 56 | 13.1332 | 0.2904 | 0.1039 | 0.213 | 0.2131 | 62.78 | 0.047 | 0.0714 | 1.0 | 1.8239 | 2227.0 | 1221.0 | 0.8421 | 0.8751 | 0.8582 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 9.0 | 63 | 11.3128 | 0.2812 | 0.0975 | 0.2032 | 0.2031 | 62.8 | 0.043 | 0.0674 | 1.0 | 1.8436 | 2251.0 | 1221.0 | 0.8405 | 0.8729 | 0.8563 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 10.0 | 70 | 9.4296 | 0.2744 | 0.0957 | 0.1972 | 0.1977 | 63.0 | 0.0431 | 0.067 | 1.0 | 1.8518 | 2261.0 | 1221.0 | 0.8399 | 0.8727 | 0.856 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 11.0 | 77 | 7.9702 | 0.278 | 0.0973 | 0.1966 | 0.1971 | 63.0 | 0.0432 | 0.0674 | 1.0 | 1.8501 | 2259.0 | 1221.0 | 0.84 | 0.8732 | 0.8562 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 12.0 | 84 | 7.4155 | 0.2826 | 0.1063 | 0.2061 | 0.2052 | 63.0 | 0.0492 | 0.0722 | 1.0 | 1.8591 | 2270.0 | 1221.0 | 0.8414 | 0.8739 | 0.8573 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |

### Framework versions
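For context, the hyperparameters recorded in the second hunk map directly onto `transformers` training arguments. Below is a minimal sketch of a configuration consistent with the card (total batch size 16, AdamW via `adamw_torch`, linear schedule, 12 epochs); the output directory, the single-device batch split, and `predict_with_generate` are assumptions, not taken from this commit.

```python
# Sketch: training arguments consistent with the card's hyperparameters.
# Values not listed in the card are assumptions.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="long-t5-tglobal-base-finetuned",  # hypothetical name
    per_device_train_batch_size=16,  # one device -> total_train_batch_size 16
    num_train_epochs=12,
    optim="adamw_torch",             # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    predict_with_generate=True,      # needed to score generations during eval
)
```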
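The metric names in the updated card line up with the outputs of the `evaluate` library: `rouge` (Rouge1/2/L/Lsum), `bleu` (bleu, precisions, brevity penalty, length ratio, translation and reference length), and `bertscore` (precision, recall, f1, plus the `roberta-large_L17...` hashcode). A minimal sketch of reproducing them, with placeholder predictions and references:

```python
# Sketch: computing the card's evaluation metrics with `evaluate`.
# The prediction/reference strings are placeholders.
import evaluate

preds = ["generated summary ..."]  # model outputs (hypothetical)
refs = ["reference summary ..."]   # gold references (hypothetical)

rouge = evaluate.load("rouge")          # rouge1, rouge2, rougeL, rougeLsum
bleu = evaluate.load("bleu")            # bleu, precisions, brevity_penalty, ...
bertscore = evaluate.load("bertscore")  # precision, recall, f1, hashcode

print(rouge.compute(predictions=preds, references=refs))
print(bleu.compute(predictions=preds, references=refs))
# lang="en" selects roberta-large, matching the hashcode in the card
print(bertscore.compute(predictions=preds, references=refs, lang="en"))
```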
model.safetensors CHANGED
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:c21122d7b9546595577586999502a71e8fdce2ffe6238d8f59a192487e5e10ac
size 1187780840
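The `oid sha256:` lines above are Git LFS pointers: the repository stores only the blob's SHA-256 and byte size, and this commit swaps in the hash of the newly trained weights (the size is unchanged). A small standard-library sketch for checking a downloaded file against its pointer; the local path is illustrative:

```python
# Sketch: verify a downloaded artifact against its Git LFS pointer.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large weights fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "c21122d7b9546595577586999502a71e8fdce2ffe6238d8f59a192487e5e10ac"
assert sha256_of("model.safetensors") == expected  # path is illustrative
```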
runs/Jul10_14-36-18_tardis/events.out.tfevents.1752150979.tardis.66384.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:545ff3956ca3e724ef66b9941cf45243121931307bebf4598084827e84fe3fdb
+size 19278
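The added file is a TensorBoard event log for this run. One way to inspect it is with the `tensorboard` package's `EventAccumulator`; the scalar tag used below is an assumption about what the Trainer logged:

```python
# Sketch: read scalar curves out of the new tfevents file.
# The tag name "eval/loss" is an assumption.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Jul10_14-36-18_tardis")
acc.Reload()
print(acc.Tags()["scalars"])            # list the available scalar tags
for event in acc.Scalars("eval/loss"):  # e.g. the validation-loss curve
    print(event.step, event.value)
```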
training_args.bin CHANGED
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:d935486008916a5193daf9dadf6ae0bb6a112bdb8132b1b1b43eb4d797444993
size 5905
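`training_args.bin` is the pickled `TrainingArguments` object that the Trainer saves next to the model; only its hash changes in this commit. A sketch for inspecting it locally (it is a pickle, so only load checkpoints you trust):

```python
# Sketch: inspect the saved training arguments. weights_only=False is needed
# on recent torch versions because this is a pickled Python object, not a
# tensor file; only do this for checkpoints you trust.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.num_train_epochs, args.lr_scheduler_type, args.optim)
```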