TasmiaAzmi committed on
Commit 60a857c · 1 Parent(s): d4218a9

update model card README.md

Files changed (1)
  1. README.md +11 -28
README.md CHANGED
@@ -12,9 +12,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # masked-sentence-generation
 
- This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
 It achieves the following results on the evaluation set:
- - Loss: 2.8396
 
 ## Model description
 
@@ -34,40 +34,23 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
- - train_batch_size: 4
- - eval_batch_size: 4
 - seed: 42
 - gradient_accumulation_steps: 16
- - total_train_batch_size: 64
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 7
 
 ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 3.1508 | 0.32 | 100 | 2.8654 |
- | 3.0787 | 0.64 | 200 | 2.8532 |
- | 3.0573 | 0.96 | 300 | 2.8440 |
- | 2.984 | 1.28 | 400 | 2.8398 |
- | 2.9727 | 1.6 | 500 | 2.8364 |
- | 2.9781 | 1.92 | 600 | 2.8336 |
- | 2.9238 | 2.24 | 700 | 2.8346 |
- | 2.8974 | 2.56 | 800 | 2.8334 |
- | 2.894 | 2.88 | 900 | 2.8312 |
- | 2.8716 | 3.2 | 1000 | 2.8348 |
- | 2.8447 | 3.52 | 1100 | 2.8332 |
- | 2.8467 | 3.84 | 1200 | 2.8332 |
- | 2.8128 | 4.16 | 1300 | 2.8357 |
- | 2.8007 | 4.48 | 1400 | 2.8362 |
- | 2.8071 | 4.8 | 1500 | 2.8367 |
- | 2.796 | 5.12 | 1600 | 2.8380 |
- | 2.7628 | 5.44 | 1700 | 2.8387 |
- | 2.7694 | 5.76 | 1800 | 2.8378 |
- | 2.7734 | 6.08 | 1900 | 2.8384 |
- | 2.7473 | 6.4 | 2000 | 2.8403 |
- | 2.758 | 6.72 | 2100 | 2.8396 |
 
 
  ### Framework versions
 
 
 # masked-sentence-generation
 
+ This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset.
 It achieves the following results on the evaluation set:
+ - Loss: nan
 
 ## Model description
 
 
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
+ - train_batch_size: 1
+ - eval_batch_size: 1
 - seed: 42
 - gradient_accumulation_steps: 16
+ - total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
  - num_epochs: 7
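As context for the batch-size values above: the total train batch size reported in this card is the per-device batch size multiplied by `gradient_accumulation_steps` (assuming a single device, which this card does not state explicitly). A minimal plain-Python sketch of that relationship, using the values from the two configurations in this diff; the function name is illustrative, not a `transformers` API:

```python
# Effective batch size = per-device batch size x gradient accumulation steps.
# Illustrative helper; values taken from this commit's before/after lists.

def effective_batch_size(per_device_batch: int, grad_accum_steps: int,
                         num_devices: int = 1) -> int:
    """Number of examples contributing to each optimizer step."""
    return per_device_batch * grad_accum_steps * num_devices

# Previous configuration: train_batch_size=4, gradient_accumulation_steps=16
print(effective_batch_size(4, 16))  # 64, matching total_train_batch_size: 64

# This commit's configuration: train_batch_size=1, gradient_accumulation_steps=16
print(effective_batch_size(1, 16))  # 16, matching total_train_batch_size: 16
```

Gradient accumulation keeps the effective batch size larger than what fits in memory per step, which is presumably why it stays at 16 even as the per-device batch size drops to 1 for the larger flan-t5-large model.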
 
 ### Training results
 
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------------------------:|:-----:|:----:|:---------------:|
+ | 84911378280078883749363712.0000 | 1.5 | 100 | nan |
+ | 0.0 | 2.99 | 200 | nan |
+ | 0.0 | 4.49 | 300 | nan |
+ | 0.0 | 5.98 | 400 | nan |
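Every validation loss logged in the new run above is `nan` (after a training loss that spiked to ~8.5e25 and then collapsed to 0.0), which usually indicates a diverged run. A hedged, plain-Python sketch (not code from this repo) of how one might flag the first NaN in a logged loss series:

```python
import math

def first_nan_step(losses_by_step):
    """Return the first step whose logged loss is NaN, or None if all are finite."""
    for step, loss in losses_by_step:
        if math.isnan(loss):
            return step
    return None

# Validation losses from the table above: every logged eval step is NaN.
eval_losses = [(100, float("nan")), (200, float("nan")),
               (300, float("nan")), (400, float("nan"))]
print(first_nan_step(eval_losses))  # 100
```

Catching the first NaN step (here, step 100) narrows down where divergence began, e.g. before or after the first epoch.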
 
  ### Framework versions