floflodebilbao committed
Commit 797856e · verified · 1 Parent(s): 133dd1e

End of training
README.md CHANGED
@@ -22,21 +22,21 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 4.2146
- - Rouge1: 0.1397
- - Rouge2: 0.0273
- - Rougel: 0.0982
- - Rougelsum: 0.097
- - Gen Len: 62.6
- - Bleu: 0.0075
- - Precisions: 0.0304
+ - Loss: 16.6459
+ - Rouge1: 0.2654
+ - Rouge2: 0.091
+ - Rougel: 0.1959
+ - Rougelsum: 0.1964
+ - Gen Len: 59.98
+ - Bleu: 0.0426
+ - Precisions: 0.0666
  - Brevity Penalty: 1.0
- - Length Ratio: 1.6642
- - Translation Length: 2032.0
+ - Length Ratio: 1.742
+ - Translation Length: 2127.0
  - Reference Length: 1221.0
- - Precision: 0.7816
- - Recall: 0.8361
- - F1: 0.8076
+ - Precision: 0.8389
+ - Recall: 0.8694
+ - F1: 0.8538
  - Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1)
 
  ## Model description
@@ -64,27 +64,21 @@ The following hyperparameters were used during training:
  - total_train_batch_size: 16
  - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
- - num_epochs: 15
+ - num_epochs: 9
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Precision | Recall | F1 | Hashcode |
  |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:------:|:----------:|:---------------:|:------------:|:------------------:|:----------------:|:---------:|:------:|:------:|:---------------------------------------------------------:|
- | No log | 1.0 | 7 | 26.0662 | 0.2775 | 0.0911 | 0.2084 | 0.2082 | 62.06 | 0.0413 | 0.0659 | 1.0 | 1.8026 | 2201.0 | 1221.0 | 0.8359 | 0.8722 | 0.8536 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 2.0 | 14 | 23.4955 | 0.273 | 0.0899 | 0.202 | 0.2032 | 61.62 | 0.0408 | 0.0663 | 1.0 | 1.7772 | 2170.0 | 1221.0 | 0.8376 | 0.8724 | 0.8546 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 3.0 | 21 | 21.3631 | 0.2652 | 0.0914 | 0.2003 | 0.2016 | 60.82 | 0.0412 | 0.0657 | 1.0 | 1.7609 | 2150.0 | 1221.0 | 0.8383 | 0.8706 | 0.854 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 4.0 | 28 | 19.5442 | 0.2774 | 0.098 | 0.2076 | 0.2081 | 59.64 | 0.0445 | 0.07 | 1.0 | 1.7387 | 2123.0 | 1221.0 | 0.8413 | 0.8726 | 0.8566 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 5.0 | 35 | 17.8808 | 0.2684 | 0.0934 | 0.1991 | 0.2008 | 59.64 | 0.0437 | 0.0681 | 1.0 | 1.7363 | 2120.0 | 1221.0 | 0.8391 | 0.8697 | 0.854 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 6.0 | 42 | 16.2547 | 0.265 | 0.088 | 0.1911 | 0.1937 | 59.42 | 0.0399 | 0.0642 | 1.0 | 1.7248 | 2106.0 | 1221.0 | 0.838 | 0.8682 | 0.8528 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 7.0 | 49 | 14.4184 | 0.2621 | 0.0856 | 0.185 | 0.1849 | 59.4 | 0.0412 | 0.0648 | 1.0 | 1.7224 | 2103.0 | 1221.0 | 0.8373 | 0.8689 | 0.8527 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 8.0 | 56 | 11.9307 | 0.2634 | 0.0904 | 0.1875 | 0.1875 | 59.06 | 0.0478 | 0.0696 | 1.0 | 1.7191 | 2099.0 | 1221.0 | 0.8369 | 0.8688 | 0.8525 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 9.0 | 63 | 8.0749 | 0.2548 | 0.0802 | 0.1823 | 0.1821 | 56.46 | 0.0471 | 0.0686 | 1.0 | 1.6355 | 1997.0 | 1221.0 | 0.837 | 0.864 | 0.8502 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 10.0 | 70 | 4.2463 | 0.1698 | 0.0315 | 0.122 | 0.1209 | 57.5 | 0.0065 | 0.0337 | 1.0 | 1.5995 | 1953.0 | 1221.0 | 0.8068 | 0.8444 | 0.8249 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 11.0 | 77 | 4.1345 | 0.1154 | 0.0265 | 0.0863 | 0.0865 | 59.86 | 0.007 | 0.0268 | 1.0 | 1.5119 | 1846.0 | 1221.0 | 0.7703 | 0.828 | 0.7976 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 12.0 | 84 | 4.2879 | 0.1033 | 0.0242 | 0.0844 | 0.0842 | 63.0 | 0.0081 | 0.0261 | 1.0 | 1.5111 | 1845.0 | 1221.0 | 0.7695 | 0.8276 | 0.797 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 13.0 | 91 | 4.3117 | 0.1271 | 0.0306 | 0.0961 | 0.0956 | 63.0 | 0.0086 | 0.0297 | 1.0 | 1.6126 | 1969.0 | 1221.0 | 0.7731 | 0.8332 | 0.8015 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 14.0 | 98 | 4.2535 | 0.1291 | 0.0277 | 0.089 | 0.0884 | 63.0 | 0.0077 | 0.0286 | 1.0 | 1.6847 | 2057.0 | 1221.0 | 0.7804 | 0.835 | 0.8064 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 15.0 | 105 | 4.2146 | 0.1397 | 0.0273 | 0.0982 | 0.097 | 62.6 | 0.0075 | 0.0304 | 1.0 | 1.6642 | 2032.0 | 1221.0 | 0.7816 | 0.8361 | 0.8076 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 1.0 | 7 | 26.1287 | 0.276 | 0.09 | 0.2096 | 0.2096 | 62.06 | 0.0413 | 0.0659 | 1.0 | 1.8026 | 2201.0 | 1221.0 | 0.8359 | 0.8722 | 0.8536 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 2.0 | 14 | 23.7190 | 0.273 | 0.0878 | 0.2019 | 0.2035 | 61.62 | 0.0408 | 0.0663 | 1.0 | 1.7772 | 2170.0 | 1221.0 | 0.8376 | 0.8724 | 0.8546 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 3.0 | 21 | 21.7653 | 0.2651 | 0.0884 | 0.2018 | 0.204 | 60.82 | 0.0414 | 0.066 | 1.0 | 1.7518 | 2139.0 | 1221.0 | 0.8378 | 0.8708 | 0.8539 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 4.0 | 28 | 20.2368 | 0.2759 | 0.0943 | 0.2072 | 0.209 | 60.0 | 0.0442 | 0.0697 | 1.0 | 1.7527 | 2140.0 | 1221.0 | 0.8409 | 0.8724 | 0.8563 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 5.0 | 35 | 19.0093 | 0.2721 | 0.0908 | 0.2035 | 0.2044 | 59.82 | 0.0436 | 0.0687 | 1.0 | 1.7428 | 2128.0 | 1221.0 | 0.8401 | 0.8705 | 0.855 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 6.0 | 42 | 18.0513 | 0.269 | 0.0927 | 0.2011 | 0.2019 | 59.82 | 0.0437 | 0.0682 | 1.0 | 1.7404 | 2125.0 | 1221.0 | 0.839 | 0.8698 | 0.8541 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 7.0 | 49 | 17.3156 | 0.2699 | 0.0921 | 0.1998 | 0.2009 | 59.82 | 0.0438 | 0.0683 | 1.0 | 1.7371 | 2121.0 | 1221.0 | 0.8393 | 0.8703 | 0.8544 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 8.0 | 56 | 16.8291 | 0.2679 | 0.0922 | 0.1988 | 0.1997 | 59.98 | 0.0437 | 0.0679 | 1.0 | 1.7461 | 2132.0 | 1221.0 | 0.8394 | 0.8699 | 0.8543 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 9.0 | 63 | 16.6459 | 0.2654 | 0.091 | 0.1959 | 0.1964 | 59.98 | 0.0426 | 0.0666 | 1.0 | 1.742 | 2127.0 | 1221.0 | 0.8389 | 0.8694 | 0.8538 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
 
 
  ### Framework versions
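The metric fields in the card's bullet list match what the `evaluate` library reports for its `rouge`, `bleu`, and `bertscore` metrics (the `roberta-large_L17` hashcode is `bertscore`'s default English configuration). A minimal sketch of how those fields could be reproduced; `predictions` and `references` are hypothetical stand-ins, since the eval set is not part of this commit:

```python
# Hedged sketch: reproducing the card's metric fields with `evaluate`.
# `predictions`/`references` are placeholder lists of decoded summaries
# and gold summaries; Gen Len is just the mean generated token length,
# computed separately.
import evaluate

predictions = ["a generated summary ..."]
references = ["the reference summary ..."]

rouge = evaluate.load("rouge")          # -> rouge1, rouge2, rougeL, rougeLsum
bleu = evaluate.load("bleu")            # -> bleu, precisions, brevity_penalty,
                                        #    length_ratio, translation/reference length
bertscore = evaluate.load("bertscore")  # -> precision, recall, f1, hashcode

print(rouge.compute(predictions=predictions, references=references))
print(bleu.compute(predictions=predictions, references=[[r] for r in references]))
print(bertscore.compute(predictions=predictions, references=references, lang="en"))
```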
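The hyperparameter bullets map directly onto `transformers` training arguments. A hedged sketch of that mapping; `output_dir` and the per-device/accumulation split are assumptions, since the card only states the effective `total_train_batch_size` of 16:

```python
# Hedged sketch: training arguments matching the card's hyperparameters.
# The per-device batch size and accumulation steps are one possible split
# that yields the stated effective batch size of 16.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="long-t5-finetuned",   # hypothetical name
    per_device_train_batch_size=2,    # 2 x 8 accumulation steps = 16 effective
    gradient_accumulation_steps=8,
    optim="adamw_torch",              # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=9,
    predict_with_generate=True,       # needed for generation metrics at eval time
)
```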
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8a290064e280e4f021c894bdb6108e2564a85a9399219cc48f31a7af6ce4f385
+ oid sha256:d388b6eeba751a4910913a8132a6c399e298480b188ef5644000cf02bfb5ed57
  size 1187780840
runs/Jul09_14-26-22_tardis/events.out.tfevents.1752063983.tardis.85888.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eaff097b750986857ac86ec10cef561f41d5fd947a59674fed9e57064b09e722
+ size 15884
tokenizer.json CHANGED
@@ -1,21 +1,7 @@
  {
    "version": "1.0",
-   "truncation": {
-     "direction": "Right",
-     "max_length": 64,
-     "strategy": "LongestFirst",
-     "stride": 0
-   },
-   "padding": {
-     "strategy": {
-       "Fixed": 64
-     },
-     "direction": "Right",
-     "pad_to_multiple_of": null,
-     "pad_id": 0,
-     "pad_type_id": 0,
-     "pad_token": "<pad>"
-   },
+   "truncation": null,
+   "padding": null,
    "added_tokens": [
      {
        "id": 0,
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:324f5823a00f4ac08d38a72d3895756b980b286a3b984c65ffaaa8e138c1221b
+ oid sha256:34e4a1e6a5754d4bbc834cc6ba2a4c0e8628f4e27768d5d79bd757e3fa62e48e
  size 5905