zoya-hammadk committed on
Commit
35d9058
·
verified ·
1 Parent(s): b4c6b13

End of training

Files changed (1)
  1. README.md +5 -14
README.md CHANGED
@@ -15,11 +15,6 @@ should probably proofread and complete it, then remove this comment. -->
  # nutrivision-roberta
 
  This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 64.6999
- - Mse: 64.6999
- - Rmse: 8.0436
- - R2: -0.7557
 
  ## Model description
 
@@ -45,19 +40,15 @@ The following hyperparameters were used during training:
  - optimizer: Use OptimizerNames.PAGED_ADAMW with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_ratio: 0.03
- - num_epochs: 1
+ - num_epochs: 5
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Mse | Rmse | R2 |
- |:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:-------:|
- | 400.7439 | 0.4348 | 10 | 92.1141 | 92.1141 | 9.5976 | -1.4997 |
- | 240.2138 | 0.8696 | 20 | 64.6999 | 64.6999 | 8.0436 | -0.7557 |
 
 
  ### Framework versions
 
- - Transformers 4.48.3
- - Pytorch 2.5.1+cu124
- - Datasets 3.6.0
- - Tokenizers 0.21.0
+ - Transformers 4.51.3
+ - Pytorch 2.6.0+cu124
+ - Datasets 2.14.4
+ - Tokenizers 0.21.1
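The learning-rate settings in the card (`lr_scheduler_type: cosine` with `lr_scheduler_warmup_ratio: 0.03`) combine a short linear warmup with a cosine decay. A minimal sketch of that multiplier in plain Python follows; the function name and signature are illustrative, not part of the actual training code, which would use the scheduler built into Transformers:

```python
import math

def lr_multiplier(step, total_steps, warmup_ratio=0.03):
    """Linear warmup followed by cosine decay, mirroring the card's
    lr_scheduler_type: cosine and lr_scheduler_warmup_ratio: 0.03.
    Returns a factor in [0, 1] applied to the base learning rate."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear ramp from 0 to 1 over the warmup phase.
        return step / max(1, warmup_steps)
    # Cosine decay from 1 down to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))
```

With 5 epochs instead of 1, `total_steps` grows fivefold, so both the warmup phase and the decay stretch accordingly while the shape of the schedule stays the same.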