End of training

Files changed:
- README.md (+23 -29)
- model.safetensors (+1 -1)
- runs/Jul09_14-26-22_tardis/events.out.tfevents.1752063983.tardis.85888.0 (+3 -0)
- tokenizer.json (+2 -16)
- training_args.bin (+1 -1)
README.md
CHANGED

@@ -22,21 +22,21 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss:
-- Rouge1: 0.
-- Rouge2: 0.
-- Rougel: 0.
-- Rougelsum: 0.
-- Gen Len:
-- Bleu: 0.
-- Precisions: 0.
+- Loss: 16.6459
+- Rouge1: 0.2654
+- Rouge2: 0.091
+- Rougel: 0.1959
+- Rougelsum: 0.1964
+- Gen Len: 59.98
+- Bleu: 0.0426
+- Precisions: 0.0666
 - Brevity Penalty: 1.0
-- Length Ratio: 1.
-- Translation Length:
+- Length Ratio: 1.742
+- Translation Length: 2127.0
 - Reference Length: 1221.0
-- Precision: 0.
-- Recall: 0.
-- F1: 0.
+- Precision: 0.8389
+- Recall: 0.8694
+- F1: 0.8538
 - Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1)
 
 ## Model description
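The metric names in the card match what the Hugging Face `evaluate` package reports for its `rouge`, `bleu`, and `bertscore` metrics: BLEU contributes the precisions, brevity penalty, length ratio, translation length, and reference length fields, while BERTScore contributes precision, recall, F1, and the hashcode shown above. A minimal sketch of computing the same bundle — the predictions and references below are toy placeholders, since the eval set is not published with this repo:

```python
import evaluate

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")
bertscore = evaluate.load("bertscore")

# Toy data standing in for the model's eval-set generations.
predictions = ["the model generated summary"]
references = ["the reference summary"]

results = {}
results.update(rouge.compute(predictions=predictions, references=references))
results.update(bleu.compute(predictions=predictions, references=references))

# BERTScore returns per-example lists; the card reports their means.
# lang="en" selects roberta-large, matching the hashcode in the card.
bs = bertscore.compute(predictions=predictions, references=references, lang="en")
for key in ("precision", "recall", "f1"):
    results[key] = sum(bs[key]) / len(bs[key])

print(results)
```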
@@ -64,27 +64,21 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 16
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- num_epochs:
+- num_epochs: 9
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Precision | Recall | F1 | Hashcode |
 |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:------:|:----------:|:---------------:|:------------:|:------------------:|:----------------:|:---------:|:------:|:------:|:---------------------------------------------------------:|
-| No log | 1.0 | 7 | 26.
-| No log | 2.0 | 14 | 23.
-| No log | 3.0 | 21 | 21.
-| No log | 4.0 | 28 |
-| No log | 5.0 | 35 |
-| No log | 6.0 | 42 |
-| No log | 7.0 | 49 |
-| No log | 8.0 | 56 |
-| No log | 9.0 | 63 |
-| No log | 10.0 | 70 | 4.2463 | 0.1698 | 0.0315 | 0.122 | 0.1209 | 57.5 | 0.0065 | 0.0337 | 1.0 | 1.5995 | 1953.0 | 1221.0 | 0.8068 | 0.8444 | 0.8249 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
-| No log | 11.0 | 77 | 4.1345 | 0.1154 | 0.0265 | 0.0863 | 0.0865 | 59.86 | 0.007 | 0.0268 | 1.0 | 1.5119 | 1846.0 | 1221.0 | 0.7703 | 0.828 | 0.7976 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
-| No log | 12.0 | 84 | 4.2879 | 0.1033 | 0.0242 | 0.0844 | 0.0842 | 63.0 | 0.0081 | 0.0261 | 1.0 | 1.5111 | 1845.0 | 1221.0 | 0.7695 | 0.8276 | 0.797 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
-| No log | 13.0 | 91 | 4.3117 | 0.1271 | 0.0306 | 0.0961 | 0.0956 | 63.0 | 0.0086 | 0.0297 | 1.0 | 1.6126 | 1969.0 | 1221.0 | 0.7731 | 0.8332 | 0.8015 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
-| No log | 14.0 | 98 | 4.2535 | 0.1291 | 0.0277 | 0.089 | 0.0884 | 63.0 | 0.0077 | 0.0286 | 1.0 | 1.6847 | 2057.0 | 1221.0 | 0.7804 | 0.835 | 0.8064 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
-| No log | 15.0 | 105 | 4.2146 | 0.1397 | 0.0273 | 0.0982 | 0.097 | 62.6 | 0.0075 | 0.0304 | 1.0 | 1.6642 | 2032.0 | 1221.0 | 0.7816 | 0.8361 | 0.8076 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 1.0 | 7 | 26.1287 | 0.276 | 0.09 | 0.2096 | 0.2096 | 62.06 | 0.0413 | 0.0659 | 1.0 | 1.8026 | 2201.0 | 1221.0 | 0.8359 | 0.8722 | 0.8536 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 2.0 | 14 | 23.7190 | 0.273 | 0.0878 | 0.2019 | 0.2035 | 61.62 | 0.0408 | 0.0663 | 1.0 | 1.7772 | 2170.0 | 1221.0 | 0.8376 | 0.8724 | 0.8546 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 3.0 | 21 | 21.7653 | 0.2651 | 0.0884 | 0.2018 | 0.204 | 60.82 | 0.0414 | 0.066 | 1.0 | 1.7518 | 2139.0 | 1221.0 | 0.8378 | 0.8708 | 0.8539 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 4.0 | 28 | 20.2368 | 0.2759 | 0.0943 | 0.2072 | 0.209 | 60.0 | 0.0442 | 0.0697 | 1.0 | 1.7527 | 2140.0 | 1221.0 | 0.8409 | 0.8724 | 0.8563 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 5.0 | 35 | 19.0093 | 0.2721 | 0.0908 | 0.2035 | 0.2044 | 59.82 | 0.0436 | 0.0687 | 1.0 | 1.7428 | 2128.0 | 1221.0 | 0.8401 | 0.8705 | 0.855 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 6.0 | 42 | 18.0513 | 0.269 | 0.0927 | 0.2011 | 0.2019 | 59.82 | 0.0437 | 0.0682 | 1.0 | 1.7404 | 2125.0 | 1221.0 | 0.839 | 0.8698 | 0.8541 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 7.0 | 49 | 17.3156 | 0.2699 | 0.0921 | 0.1998 | 0.2009 | 59.82 | 0.0438 | 0.0683 | 1.0 | 1.7371 | 2121.0 | 1221.0 | 0.8393 | 0.8703 | 0.8544 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 8.0 | 56 | 16.8291 | 0.2679 | 0.0922 | 0.1988 | 0.1997 | 59.98 | 0.0437 | 0.0679 | 1.0 | 1.7461 | 2132.0 | 1221.0 | 0.8394 | 0.8699 | 0.8543 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+| No log | 9.0 | 63 | 16.6459 | 0.2654 | 0.091 | 0.1959 | 0.1964 | 59.98 | 0.0426 | 0.0666 | 1.0 | 1.742 | 2127.0 | 1221.0 | 0.8389 | 0.8694 | 0.8538 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
 
 
 ### Framework versions
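The hyperparameter bullets map directly onto `transformers` `Seq2SeqTrainingArguments`. A sketch of the corresponding setup — `output_dir`, `learning_rate`, and the per-device batch size / gradient accumulation split are assumptions, since only their product (the total train batch size of 16) appears in this hunk:

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="long-t5-tglobal-base-finetuned",  # hypothetical name
    learning_rate=5e-5,              # placeholder; not shown in this hunk
    per_device_train_batch_size=4,   # assumed split; 4 * 4 accumulation steps
    gradient_accumulation_steps=4,   # gives the total_train_batch_size of 16
    optim="adamw_torch",             # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=9,
    predict_with_generate=True,      # needed for ROUGE/BLEU at eval time
)
```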
model.safetensors
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:d388b6eeba751a4910913a8132a6c399e298480b188ef5644000cf02bfb5ed57
 size 1187780840
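The weight file is stored through Git LFS, so the repository itself only tracks a small pointer (version, oid, size); the sha256 in the pointer can be used to verify a locally downloaded copy of the 1.1 GB checkpoint. A minimal sketch:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so the full checkpoint never sits in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "d388b6eeba751a4910913a8132a6c399e298480b188ef5644000cf02bfb5ed57"
assert sha256_of("model.safetensors") == expected
```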
runs/Jul09_14-26-22_tardis/events.out.tfevents.1752063983.tardis.85888.0
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eaff097b750986857ac86ec10cef561f41d5fd947a59674fed9e57064b09e722
+size 15884
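The added file is a TensorBoard event log for this run (also stored via LFS). Once the actual log is pulled, one way to read its scalars without launching TensorBoard is the event-accumulator API that ships with the `tensorboard` package — a sketch:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Jul09_14-26-22_tardis")
acc.Reload()  # parse every events.out.tfevents.* file in the directory
for tag in acc.Tags()["scalars"]:
    for event in acc.Scalars(tag):
        print(tag, event.step, event.value)
```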
tokenizer.json
CHANGED

@@ -1,21 +1,7 @@
 {
   "version": "1.0",
-  "truncation": {
-    "direction": "Right",
-    "max_length": 64,
-    "strategy": "LongestFirst",
-    "stride": 0
-  },
-  "padding": {
-    "strategy": {
-      "Fixed": 64
-    },
-    "direction": "Right",
-    "pad_to_multiple_of": null,
-    "pad_id": 0,
-    "pad_type_id": 0,
-    "pad_token": "<pad>"
-  },
+  "truncation": null,
+  "padding": null,
   "added_tokens": [
     {
       "id": 0,
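This change drops the truncation (max_length 64, LongestFirst) and Fixed-64 padding that were baked into the serialized tokenizer, leaving both `null` so callers control them at encode time instead. With the `tokenizers` library, the before and after states correspond to the following calls — a sketch:

```python
from tokenizers import Tokenizer

tok = Tokenizer.from_file("tokenizer.json")

# Old serialized state: truncate to 64 and pad every encoding to length 64.
tok.enable_truncation(max_length=64, stride=0, strategy="longest_first")
tok.enable_padding(length=64, direction="right", pad_id=0, pad_token="<pad>")

# New serialized state: no baked-in truncation or padding.
tok.no_truncation()
tok.no_padding()

tok.save("tokenizer.json")  # re-serializes with "truncation": null, "padding": null
```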
training_args.bin
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:34e4a1e6a5754d4bbc834cc6ba2a4c0e8628f4e27768d5d79bd757e3fa62e48e
 size 5905
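training_args.bin is the pickled `TrainingArguments` object that the Trainer saves alongside a checkpoint, not a tensor file, so it is restored with `torch.load` rather than safetensors. A sketch — note that `weights_only=False` is required on recent torch because this is an arbitrary pickle, so only load files you trust:

```python
import torch
from transformers import TrainingArguments  # class must be importable to unpickle

args: TrainingArguments = torch.load("training_args.bin", weights_only=False)
print(args.num_train_epochs, args.lr_scheduler_type, args.optim)
```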